
Showing papers on "Domain knowledge published in 2016"


Posted Content
TL;DR: This work introduces a new approach to supervising neural networks by specifying constraints that should hold over the output space, rather than providing direct examples of input-output pairs; the constraints are derived from prior domain knowledge.
Abstract: In many machine learning applications, labeled data is scarce and obtaining more labels is expensive. We introduce a new approach to supervising neural networks by specifying constraints that should hold over the output space, rather than direct examples of input-output pairs. These constraints are derived from prior domain knowledge, e.g., from known laws of physics. We demonstrate the effectiveness of this approach on real world and simulated computer vision tasks. We are able to train a convolutional neural network to detect and track objects without any labeled examples. Our approach can significantly reduce the need for labeled training data, but introduces new challenges for encoding prior knowledge into appropriate loss functions.
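
A minimal sketch of the core idea — penalising outputs that violate a known physical law instead of comparing them to labels — for the free-fall tracking setting, assuming the network predicts one height per frame (all function and variable names here are illustrative, not the authors' code):

```python
import torch

def free_fall_constraint_loss(heights, dt=0.1, g=9.8):
    """Label-free loss: penalise predicted heights that violate free fall.

    heights: (batch, T) tensor of per-frame object heights predicted by a
    network from video frames. For an object in free fall, the discrete
    second difference of height equals -g * dt**2, so any deviation from
    that constant is penalised -- no labelled positions are required.
    """
    second_diff = heights[:, 2:] - 2.0 * heights[:, 1:-1] + heights[:, :-2]
    return ((second_diff + g * dt ** 2) ** 2).mean()

# Hypothetical usage (network and data names are illustrative only):
#   heights = tracker_net(frames)          # (batch, T)
#   loss = free_fall_constraint_loss(heights)
#   loss.backward()
```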

229 citations


Journal ArticleDOI
TL;DR: A set of principles to be implemented routinely as part of any applied research program is identified, including stakeholder mapping prior to the commencement of research programs to identify all stakeholders, and the individual, institutional and financial capacities that must be developed to underpin successful knowledge exchange strategies.

221 citations


Journal ArticleDOI
TL;DR: This paper takes advantage of the strengths of BIM, ontology and semantic web technology to establish an ontology-based methodology/framework for construction risk knowledge management in a BIM environment.

164 citations


Posted Content
TL;DR: A new, publicly available simulation framework is introduced, in which a simulator designed for the movie-booking domain leverages both rules and collected data; several agents are demonstrated and the procedure for adding and testing your own agent is detailed.
Abstract: Despite widespread interests in reinforcement-learning for task-oriented dialogue systems, several obstacles can frustrate research and development progress. First, reinforcement learners typically require interaction with the environment, so conventional dialogue corpora cannot be used directly. Second, each task presents specific challenges, requiring separate corpus of task-specific annotated data. Third, collecting and annotating human-machine or human-human conversations for task-oriented dialogues requires extensive domain knowledge. Because building an appropriate dataset can be both financially costly and time-consuming, one popular approach is to build a user simulator based upon a corpus of example dialogues. Then, one can train reinforcement learning agents in an online fashion as they interact with the simulator. Dialogue agents trained on these simulators can serve as an effective starting point. Once agents master the simulator, they may be deployed in a real environment to interact with humans, and continue to be trained online. To ease empirical algorithmic comparisons in dialogues, this paper introduces a new, publicly available simulation framework, where our simulator, designed for the movie-booking domain, leverages both rules and collected data. The simulator supports two tasks: movie ticket booking and movie seeking. Finally, we demonstrate several agents and detail the procedure to add and test your own agent in the proposed framework.
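
A schematic sketch of the train-against-a-simulator loop described above, using toy stand-in classes (the interfaces and reward scheme are hypothetical, not the released framework's API):

```python
import random

class ToyMovieSimulator:
    """Toy stand-in for a movie-booking user simulator (illustrative only)."""
    def __init__(self, max_turns=10):
        self.max_turns = max_turns

    def reset(self):
        self.turn = 0
        self.goal = {"movie": "some_movie", "tickets": 2}
        return {"intent": "request", "slots": list(self.goal)}  # user opens the dialogue

    def step(self, system_action):
        self.turn += 1
        # Reward only when the system "books" something within the turn budget.
        success = system_action.get("intent") == "book" and self.turn <= self.max_turns
        done = success or self.turn >= self.max_turns
        reward = 1.0 if success else (-1.0 if done else -0.05)
        user_action = {"intent": "thanks" if success else "inform"}
        return user_action, reward, done

class RandomAgent:
    """Placeholder agent; a real RL agent would learn from the stored transitions."""
    def act(self, observation):
        return {"intent": random.choice(["request", "inform", "book"])}
    def observe(self, obs, action, reward, next_obs, done):
        pass  # an RL agent would update its policy/value estimates here

def run_episodes(agent, simulator, episodes=100):
    returns = []
    for _ in range(episodes):
        obs, done, total = simulator.reset(), False, 0.0
        while not done:
            action = agent.act(obs)
            next_obs, reward, done = simulator.step(action)
            agent.observe(obs, action, reward, next_obs, done)
            obs, total = next_obs, total + reward
        returns.append(total)
    return sum(returns) / len(returns)

print(run_episodes(RandomAgent(), ToyMovieSimulator()))
```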

156 citations


Posted Content
TL;DR: In this paper, the first scalable end-to-end approach to learning approximate Nash equilibria without prior domain knowledge is proposed, combining fictitious self-play with deep reinforcement learning and applying it to large-scale games of imperfect information.
Abstract: Many real-world applications can be described as large-scale games of imperfect information. To deal with these challenging domains, prior work has focused on computing Nash equilibria in a handcrafted abstraction of the domain. In this paper we introduce the first scalable end-to-end approach to learning approximate Nash equilibria without prior domain knowledge. Our method combines fictitious self-play with deep reinforcement learning. When applied to Leduc poker, Neural Fictitious Self-Play (NFSP) approached a Nash equilibrium, whereas common reinforcement learning methods diverged. In Limit Texas Holdem, a poker game of real-world scale, NFSP learnt a strategy that approached the performance of state-of-the-art, superhuman algorithms based on significant domain expertise.
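
A simplified sketch of the NFSP action-selection step described above (not the paper's code; state and action names are illustrative): with probability eta the agent plays an epsilon-greedy best response and records that action to train its average-policy network, otherwise it samples from its average strategy.

```python
import random

def nfsp_act(state, q_values, avg_policy_probs, eta=0.1, epsilon=0.06):
    """One NFSP-style action choice (simplified sketch).

    q_values:         dict action -> estimated return (the RL best-response net).
    avg_policy_probs: dict action -> probability (the supervised average-policy net).
    """
    if random.random() < eta:
        if random.random() < epsilon:
            action = random.choice(list(q_values))
        else:
            action = max(q_values, key=q_values.get)
        sl_record = (state, action)   # feeds supervised learning of the average policy
    else:
        actions, probs = zip(*avg_policy_probs.items())
        action = random.choices(actions, weights=probs, k=1)[0]
        sl_record = None
    return action, sl_record

# Illustrative call with made-up poker actions:
action, sl_record = nfsp_act(
    state="my_hand",
    q_values={"fold": -1.0, "call": 0.2, "raise": 0.5},
    avg_policy_probs={"fold": 0.2, "call": 0.5, "raise": 0.3},
)
print(action, sl_record)
```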

143 citations


Proceedings ArticleDOI
16 May 2016
TL;DR: The Latent Convolutional Skip Chain Conditional Random Field (LC-SC-CRF) time series model learns a set of interpretable and composable action primitives from sensor data and performs well without requiring hand-crafted features or intricate domain knowledge.
Abstract: Fine-grained action recognition is important for many applications of human-robot interaction, automated skill assessment, and surveillance. The goal is to segment and classify all actions occurring in a time series sequence. While recent recognition methods have shown strong performance in robotics applications, they often require hand-crafted features, use large amounts of domain knowledge, or employ overly simplistic representations of how objects change throughout an action. In this paper we present the Latent Convolutional Skip Chain Conditional Random Field (LC-SC-CRF). This time series model learns a set of interpretable and composable action primitives from sensor data. We apply our model to cooking tasks using accelerometer data from the University of Dundee 50 Salads dataset and to robotic surgery training tasks using robot kinematic data from the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Our performance on 50 Salads and JIGSAWS are 18.0% and 5.3% higher than the state of the art, respectively. This model performs well without requiring hand-crafted features or intricate domain knowledge. The code and features have been made public.

118 citations


Journal ArticleDOI
TL;DR: Different techniques that facilitate the expert elicitation process are reviewed in this paper, and new techniques should be developed and applied to address uncertainty and improve the accuracy of modeling shipping accidents.

112 citations


Journal ArticleDOI
01 Nov 2016
TL;DR: An Online Task Assignment algorithm, which judiciously and efficiently assigns tasks to appropriate workers, is designed, and DOCS, a system deployed on Amazon Mechanical Turk, is built; it performs much better than state-of-the-art approaches.
Abstract: Crowdsourcing is a new computing paradigm that harnesses human effort to solve computer-hard problems, such as entity resolution and photo tagging. The crowd (or workers) have diverse qualities and it is important to effectively model a worker's quality. Most of existing worker models assume that workers have the same quality on different tasks. In practice, however, tasks belong to a variety of diverse domains, and workers have different qualities on different domains. For example, a worker who is a basketball fan should have better quality for the task of labeling a photo related to 'Stephen Curry' than the one related to 'Leonardo DiCaprio'. In this paper, we study how to leverage domain knowledge to accurately model a worker's quality. We examine using knowledge base (KB), e.g., Wikipedia and Freebase, to detect the domains of tasks and workers. We develop Domain Vector Estimation, which analyzes the domains of a task with respect to the KB. We also study Truth Inference, which utilizes the domain-sensitive worker model to accurately infer the true answer of a task. We design an Online Task Assignment algorithm, which judiciously and efficiently assigns tasks to appropriate workers. To implement these solutions, we have built DOCS, a system deployed on the Amazon Mechanical Turk. Experiments show that DOCS performs much better than the state-of-the-art approaches.
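
A simplified sketch of domain-sensitive truth inference in the spirit described above (not the DOCS implementation, which uses a probabilistic model): each worker's vote is weighted by their quality in the task's domains, and the answer with the highest total weight wins.

```python
def domain_weighted_truth(task_domains, worker_quality, answers):
    """task_domains:   dict domain -> weight for this task, e.g. {"basketball": 0.9}
    worker_quality: dict worker -> {domain: quality in [0, 1]} (0.5 = unknown)
    answers:        dict worker -> answer given for this task
    """
    scores = {}
    for worker, answer in answers.items():
        weight = sum(w * worker_quality[worker].get(d, 0.5)
                     for d, w in task_domains.items())
        scores[answer] = scores.get(answer, 0.0) + weight
    return max(scores, key=scores.get)

# Example: a basketball-related labelling task with made-up workers.
truth = domain_weighted_truth(
    {"basketball": 0.9, "film": 0.1},
    {"w1": {"basketball": 0.95}, "w2": {"film": 0.9, "basketball": 0.4}},
    {"w1": "Stephen Curry", "w2": "Leonardo DiCaprio"},
)
print(truth)  # "Stephen Curry" -- the basketball fan's weighted vote dominates
```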

99 citations


Journal ArticleDOI
TL;DR: It is suggested that dense semantic structure facilitates early word learning and real-time novel word recognition in infants, irrespective of overall vocabulary size.
Abstract: Recent research suggests that infants tend to add words to their vocabulary that are semantically related to other known words, though it is not clear why this pattern emerges. In this paper, we explore whether infants leverage their existing vocabulary and semantic knowledge when interpreting novel label-object mappings in real time. We initially identified categorical domains for which individual 24-month-old infants have relatively higher and lower levels of knowledge, irrespective of overall vocabulary size. Next, we taught infants novel words in these higher and lower knowledge domains and then asked if their subsequent real-time recognition of these items varied as a function of their category knowledge. While our participants successfully acquired the novel label-object mappings in our task, there were important differences in the way infants recognized these words in real time. Namely, infants showed more robust recognition of high (vs. low) domain knowledge words. These findings suggest that dense semantic structure facilitates early word learning and real-time novel word recognition.

97 citations


Proceedings ArticleDOI
16 May 2016
TL;DR: In this paper, the authors focus on the challenging problem of place categorization and semantic mapping on a robot without environment-specific training, and build a system upon a state-of-the-art convolutional network with a series of one-vs-all classifiers.
Abstract: In this paper we focus on the challenging problem of place categorization and semantic mapping on a robot without environment-specific training. Motivated by their ongoing success in various visual recognition tasks, we build our system upon a state-of-the-art convolutional network. We overcome its closed-set limitations by complementing the network with a series of one-vs-all classifiers that can learn to recognize new semantic classes online. Prior domain knowledge is incorporated by embedding the classification system into a Bayesian filter framework that also ensures temporal coherence. We evaluate the classification accuracy of the system on a robot that maps a variety of places on our campus in real-time. We show how semantic information can boost robotic object detection performance and how the semantic map can be used to modulate the robot's behaviour during navigation tasks. The system is made available to the community as a ROS module.
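
A small illustrative sketch of the Bayesian-filter step that enforces temporal coherence over the per-frame class probabilities (the transition model and numbers here are assumptions, not the paper's):

```python
import numpy as np

def bayes_filter_step(belief, observation_probs, stay_prob=0.9):
    """One temporal-smoothing step for per-frame place-class probabilities.

    belief:            (K,) posterior over K semantic classes from the previous frame.
    observation_probs: (K,) per-class probabilities from the CNN / one-vs-all classifiers.
    stay_prob:         prior probability that the place category does not change
                       between consecutive frames (encodes temporal coherence).
    """
    K = belief.shape[0]
    transition = np.full((K, K), (1.0 - stay_prob) / (K - 1))
    np.fill_diagonal(transition, stay_prob)
    predicted = transition @ belief            # predict step
    posterior = predicted * observation_probs  # update with the new observation
    return posterior / posterior.sum()

belief = np.full(4, 0.25)
for obs in [np.array([0.7, 0.1, 0.1, 0.1]), np.array([0.3, 0.4, 0.2, 0.1])]:
    belief = bayes_filter_step(belief, obs)
print(belief)
```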

92 citations


Posted Content
TL;DR: Recently proposed models based on generative encoder-decoder neural network architectures are reviewed, and it is shown that these models are better able to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure.
Abstract: Researchers have recently started investigating deep neural networks for dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq) models have shown promising results for unstructured tasks, such as word-level dialogue response generation. The hope is that such models will be able to leverage massive amounts of data to learn meaningful natural language representations and response generation strategies, while requiring a minimum amount of domain knowledge and hand-crafting. An important challenge is to develop models that can effectively incorporate dialogue context and generate meaningful and diverse responses. In support of this goal, we review recently proposed models based on generative encoder-decoder neural network architectures, and show that these models have better ability to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure.

Proceedings ArticleDOI
11 Apr 2016
TL;DR: This paper matches a large, publicly available Web table corpus to the DBpedia knowledge base, empirically examines the Local Closed World Assumption to determine the maximal number of correct facts that an ideal data fusion strategy could generate, and concludes that knowledge-based trust outperforms PageRank- and voting-based fusion.
Abstract: Cross-domain knowledge bases such as DBpedia, YAGO, or the Google Knowledge Graph have gained increasing attention over the last years and are starting to be deployed within various use cases. However, the content of such knowledge bases is far from being complete, far from always being correct, and suffers from deprecation (i.e. population numbers become outdated after some time). Hence, there are efforts to leverage various types of Web data to complement, update and extend such knowledge bases. A source of Web data that potentially provides a very wide coverage are millions of relational HTML tables that are found on the Web. The existing work on using data from Web tables to augment cross-domain knowledge bases reports only aggregated performance numbers. The actual content of the Web tables and the topical areas of the knowledge bases that can be complemented using the tables remain unclear. In this paper, we match a large, publicly available Web table corpus to the DBpedia knowledge base. Based on the matching results, we profile the potential of Web tables for augmenting different parts of cross-domain knowledge bases and report detailed statistics about classes, properties, and instances for which missing values can be filled using Web table data as evidence. In order to estimate the potential quality of the new values, we empirically examine the Local Closed World Assumption and use it to determine the maximal number of correct facts that an ideal data fusion strategy could generate. Using this as ground truth, we compare three data fusion strategies and conclude that knowledge-based trust outperforms PageRank- and voting-based fusion.
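
A toy comparison of the two fusion families contrasted above — plain voting versus trust-weighted voting — over conflicting values extracted from different Web tables (the trust scores and values are made up; knowledge-based trust in the paper estimates them from agreement with the knowledge base):

```python
def fuse(candidate_values, source_trust=None):
    """candidate_values: list of (source_id, value) pairs proposing a value for one
                      (entity, property) slot, e.g. a city's population.
    source_trust:     optional dict source_id -> trust score; when given, each
                      vote is weighted by it (a stand-in for knowledge-based trust).
    """
    scores = {}
    for source, value in candidate_values:
        weight = 1.0 if source_trust is None else source_trust.get(source, 0.0)
        scores[value] = scores.get(value, 0.0) + weight
    return max(scores, key=scores.get)

candidates = [("t1", 3_500_000), ("t2", 3_500_000), ("t3", 3_769_495)]
print(fuse(candidates))                                     # plain voting picks 3,500,000
print(fuse(candidates, {"t1": 0.2, "t2": 0.2, "t3": 0.9}))  # trust-weighted picks 3,769,495
```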

Posted Content
TL;DR: gvnn, introduced in this paper, is a neural network library aimed at bridging the gap between classic geometric computer vision and deep learning; it opens up applications in learning invariance to 3D geometric transformations for place recognition, end-to-end visual odometry, depth estimation, and unsupervised learning through warping with a parametric transformation for image reconstruction error.
Abstract: We introduce gvnn, a neural network library in Torch aimed towards bridging the gap between classic geometric computer vision and deep learning. Inspired by the recent success of Spatial Transformer Networks, we propose several new layers which are often used as parametric transformations on the data in geometric computer vision. These layers can be inserted within a neural network much in the spirit of the original spatial transformers and allow backpropagation to enable end-to-end learning of a network involving any domain knowledge in geometric computer vision. This opens up applications in learning invariance to 3D geometric transformation for place recognition, end-to-end visual odometry, depth estimation and unsupervised learning through warping with a parametric transformation for image reconstruction error.
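
A sketch of the warping-based unsupervised objective this kind of library enables; gvnn itself is a Torch library, so this uses standard PyTorch spatial-transformer ops as a stand-in, and the network name in the usage comment is hypothetical:

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(image, target, theta):
    """theta: (N, 2, 3) parameters of an affine transform predicted by a network.
    The source image is warped by theta and compared to the target, so the
    transform parameters can be learned from photometric error alone.
    """
    grid = F.affine_grid(theta, image.shape, align_corners=False)
    warped = F.grid_sample(image, grid, align_corners=False)
    return ((warped - target) ** 2).mean()

# Hypothetical usage:
#   theta = pose_net(image, target)        # (N, 2, 3)
#   loss = reconstruction_loss(image, target, theta)
```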

Journal ArticleDOI
TL;DR: TelCoVis, an interactive visual analytics system, is presented; it helps analysts leverage their domain knowledge to gain insight into co-occurrence in urban human mobility based on telco data, using biclustering techniques that allow analysts to better explore coordinated relationships among different regions and identify interesting patterns.
Abstract: Understanding co-occurrence in urban human mobility (i.e. people from two regions visit an urban place during the same time span) is of great value in a variety of applications, such as urban planning, business intelligence, social behavior analysis, as well as containing contagious diseases. In recent years, the widespread use of mobile phones brings an unprecedented opportunity to capture large-scale and fine-grained data to study co-occurrence in human mobility. However, due to the lack of systematic and efficient methods, it is challenging for analysts to carry out in-depth analyses and extract valuable information. In this paper, we present TelCoVis, an interactive visual analytics system, which helps analysts leverage their domain knowledge to gain insight into the co-occurrence in urban human mobility based on telco data. Our system integrates visualization techniques with new designs and combines them in a novel way to enhance analysts' perception for a comprehensive exploration. In addition, we propose to study the correlations in co-occurrence (i.e. people from multiple regions visit different places during the same time span) by means of biclustering techniques that allow analysts to better explore coordinated relationships among different regions and identify interesting patterns. The case studies based on a real-world dataset and interviews with domain experts have demonstrated the effectiveness of our system in gaining insights into co-occurrence and facilitating various analytical tasks.

Proceedings ArticleDOI
Xu Zhao, Kirk Rodrigues, Yu Luo, Ding Yuan, Michael Stumm
02 Nov 2016
TL;DR: Stitch is substantially different from all prior related tools in that it is capable of constructing a system model of an entire software stack without building any domain knowledge into Stitch; it automatically reconstructs the extensive domain knowledge of the programmers who wrote the code.
Abstract: Understanding the performance behavior of distributed server stacks at scale is non-trivial. The servicing of just a single request can trigger numerous sub-requests across heterogeneous software components; and many similar requests are serviced concurrently and in parallel. When a user experiences poor performance, it is extremely difficult to identify the root cause, as well as the software components and machines that are the culprits.This paper describes Stitch, a non-intrusive tool capable of profiling the performance of an entire distributed software stack solely using the unstructured logs output by heterogeneous software components. Stitch is substantially different from all prior related tools in that it is capable of constructing a system model of an entire software stack without building any domain knowledge into Stitch. Instead, it automatically reconstructs the extensive domain knowledge of the programmers who wrote the code; it does this by relying on the Flow Reconstruction Principle which states that programmers log events such that one can reliably reconstruct the execution flow a posteriori.
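
A toy illustration of the Flow Reconstruction idea, not Stitch itself (Stitch discovers identifiers automatically across heterogeneous log formats, whereas this sketch assumes a fixed, hypothetical request-id pattern): grouping log lines by the identifiers programmers tend to log recovers per-request execution flows across components.

```python
import re
from collections import defaultdict

def group_log_lines_by_request(log_lines, id_pattern=r"req[-_]?id[=:]\s*(\w+)"):
    """Group unstructured log lines by a request identifier found in each line."""
    flows = defaultdict(list)
    for line in log_lines:
        match = re.search(id_pattern, line, flags=re.IGNORECASE)
        if match:
            flows[match.group(1)].append(line)
    return dict(flows)

logs = [
    "10:01 frontend  req_id=42 received GET /photo",
    "10:01 cache     req_id=42 miss",
    "10:02 storage   req_id=42 read block 7",
    "10:02 frontend  req_id=43 received GET /tag",
]
for req, lines in group_log_lines_by_request(logs).items():
    print(req, "->", len(lines), "events")
```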

Journal ArticleDOI
TL;DR: The authors employed ontological principles to generate an IDM for the precast concrete domain and to link its MVD with formal information models, yielding a new approach to formalizing domain knowledge and defining accurate data modules for model views.

Book
25 May 2016
TL;DR: This text reviews the fundamental theory and latest methods for including contextual information in fusion process design and implementation, emphasizing the benefit of combining different techniques to overcome the limitations of a single perspective.
Abstract: This text reviews the fundamental theory and latest methods for including contextual information in fusion process design and implementation. Chapters are contributed by the foremost international experts, spanning numerous developments and applications. The book highlights high- and low-level information fusion problems, performance evaluation under highly demanding conditions, and design principles. A particular focus is placed on approaches that integrate research from different communities, emphasizing the benefit of combining different techniques to overcome the limitations of a single perspective. Features: introduces the terminology and core elements in information fusion and context; presents key themes for context-enhanced information fusion; discusses design issues in developing context-aware fusion systems; provides mathematical grounds for modeling the contextual influences in representative fusion problems; describes the fusion of hard and soft data; reviews a diverse range of applications.

Journal Article
TL;DR: This paper proposes a framework for organizing and sharing knowledge gleaned from the Internet and describes a prototypical knowledge sharing system: I-KSS.
Abstract: Knowledge is increasingly recognized by most large business organizations to be a primary resource in sustaining competitive advantage. Knowledge sharing has been identified as a key challenge in knowledge management. Many organizations are using the Internet not only as a pool of enormous information but also for supporting information and knowledge sharing. This paper proposes a framework for organizing and sharing knowledge gleaned from the Internet and describes a prototypical knowledge sharing system: I-KSS.

Journal ArticleDOI
TL;DR: It is proposed that KIOs’ success in transferring and protecting knowledge emerges not directly from formal knowledge transfer mechanisms but from knowledge workers’ improvisation processes.
Abstract: Purpose This paper aims to answer the question: how do knowledge workers’ improvisation processes promote both knowledge transfer and protection in knowledge-intensive organizations (KIOs)? A model is proposed identifying how effective improvisation can strengthen the effect of four specific knowledge transfer mechanisms – an experimental culture, minimal structures, the practice of storytelling and shared mental models – on knowledge transfer inside the organization and knowledge protection outside of it. Design/methodology/approach The paper builds on a knowledge translation perspective to position improvisation as intrinsically intertwined with knowledge transfer and knowledge protection. Findings Improvisation is proposed as the moderating factor enhancing the positive impact of an experimental culture, minimal structures, storytelling practice and shared mental models on knowledge transfer and knowledge protection. Practical implications The paper argues against a “plug-and-play” approach to knowledge transfer that seeks to replicate knowledge without considering how people relate to the routines and the context and highlights to leaders of KIOs the importance of developing awareness, understanding and motivation to improvise to internalize new knowledge being transferred and to create imitation barriers. Originality/value The paper proposes that KIOs’ success in transferring and protecting knowledge emerges not directly from formal knowledge transfer mechanisms but from knowledge workers’ improvisation processes.

Book ChapterDOI
18 Sep 2016
TL;DR: A supervised abstraction method based on behavioral activity patterns that capture domain knowledge on the relation between activities and events is proposed; the evaluation shows that state-of-the-art process mining methods provide valuable insights on the usage of the system when using the abstracted event log, but fail when using the original lower-level event log.
Abstract: Process mining techniques analyze processes based on event data. A crucial assumption for process analysis is that events correspond to occurrences of meaningful activities. Often, low-level events recorded by information systems do not directly correspond to these. Abstraction methods, which provide a mapping from the recorded events to activities recognizable by process workers, are needed. Existing supervised abstraction methods require a full model of the entire process as input and cannot handle noise. This paper proposes a supervised abstraction method based on behavioral activity patterns that capture domain knowledge on the relation between activities and events. Through an alignment between the activity patterns and the low-level event logs an abstracted event log is obtained. Events in the abstracted event log correspond to instantiations of recognizable activities. The method is evaluated with domain experts of a Norwegian hospital using an event log from their digital whiteboard system. The evaluation shows that state-of-the art process mining methods provide valuable insights on the usage of the system when using the abstracted event log, but fail when using the original lower level event log.
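
A very simplified sketch of pattern-based event abstraction (the paper computes alignments over behavioural activity patterns; here a pattern is just an ordered list of low-level event types, and the pattern and log contents are invented for illustration):

```python
def abstract_events(low_level_events, activity_patterns):
    """Map a low-level event sequence to recognisable activity instances."""
    abstracted, i = [], 0
    while i < len(low_level_events):
        for activity, pattern in activity_patterns.items():
            window = low_level_events[i:i + len(pattern)]
            if window == pattern:
                abstracted.append(activity)       # one activity instance recognised
                i += len(pattern)
                break
        else:
            i += 1                                # unexplained event, skipped as noise
    return abstracted

patterns = {"Blood sample taken": ["nurse_called", "sample_drawn", "label_printed"]}
log = ["login", "nurse_called", "sample_drawn", "label_printed", "logout"]
print(abstract_events(log, patterns))  # ['Blood sample taken']
```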

Journal Article
TL;DR: This paper discusses a case study of a knowledge management strategy for a technical organization and explains the specific knowledge management strategies used in that organization.
Abstract: Knowledge management is being discussed as one of the key pillars of an organization's human capital strategy (6). With potential knowledge drains in many organizations, knowledge management is being applied for capturing, leveraging, and sharing knowledge internally and externally. Many papers in the literature discuss various methodologies for developing knowledge management initiatives. However, few explain specific knowledge management strategies that are used in an organization. This paper discusses a case study of a knowledge management strategy for a technical organization.

Journal ArticleDOI
TL;DR: It is argued that mechanism-based modeling and simulation approaches, where existing domain knowledge is formally integrated using complexity science and quantitative systems pharmacology, can be combined with data-driven analytics to generate predictive actionable knowledge for drug discovery programs, target validation, and optimization of clinical development.
Abstract: Massive investment and technological advances in the collection of extensive and longitudinal information on thousands of Alzheimer patients results in large amounts of data. These "big-data" databases can potentially advance CNS research and drug development. However, although necessary, they are not sufficient, and we posit that they must be matched with analytical methods that go beyond retrospective data-driven associations with various clinical phenotypes. Although these empirically derived associations can generate novel and useful hypotheses, they need to be organically integrated in a quantitative understanding of the pathology that can be actionable for drug discovery and development. We argue that mechanism-based modeling and simulation approaches, where existing domain knowledge is formally integrated using complexity science and quantitative systems pharmacology can be combined with data-driven analytics to generate predictive actionable knowledge for drug discovery programs, target validation, and optimization of clinical development.

Posted Content
17 May 2016
TL;DR: The loose structured domain knowledge base is introduced, which can be built with a small amount of manual work and easily adopted by the Recall gate, so as to enhance the LSTM by cooperating with its local memory to capture the implicit semantic relevance between sentences within conversations.
Abstract: Modeling human conversations is the essence for building satisfying chat-bots with multi-turn dialog ability. Conversation modeling will notably benefit from domain knowledge since the relationships between sentences can be clarified due to semantic hints introduced by knowledge. In this paper, a deep neural network is proposed to incorporate background knowledge for conversation modeling. Through a specially designed Recall gate, domain knowledge can be transformed into the extra global memory of Long Short-Term Memory (LSTM), so as to enhance LSTM by cooperating with its local memory to capture the implicit semantic relevance between sentences within conversations. In addition, this paper introduces the loose structured domain knowledge base, which can be built with slight amount of manual work and easily adopted by the Recall gate. Our model is evaluated on the context-oriented response selecting task, and experimental results on both two datasets have shown that our approach is promising for modeling human conversations and building key components of automatic chatting systems.
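
A simplified sketch of the Recall-gate idea, not the paper's exact equations (layer sizes and the way the gated knowledge is injected into the cell state are assumptions): a gate computed from the current input, the previous hidden state and a global knowledge vector decides how much of that knowledge to add to the LSTM's memory.

```python
import torch
import torch.nn as nn

class RecallAugmentedLSTM(nn.Module):
    """LSTM cell augmented with a gate over a global knowledge vector (sketch)."""
    def __init__(self, input_size, hidden_size, knowledge_size):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.recall_gate = nn.Linear(input_size + hidden_size + knowledge_size, hidden_size)
        self.knowledge_proj = nn.Linear(knowledge_size, hidden_size)

    def forward(self, x_t, state, knowledge):
        h, c = self.cell(x_t, state)
        r = torch.sigmoid(self.recall_gate(torch.cat([x_t, h, knowledge], dim=-1)))
        c = c + r * torch.tanh(self.knowledge_proj(knowledge))  # inject gated global memory
        return h, c

# Hypothetical usage:
#   lstm = RecallAugmentedLSTM(128, 256, 64)
#   h, c = lstm(x_t, (h_prev, c_prev), knowledge_vec)
```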

Proceedings ArticleDOI
01 Aug 2016
TL;DR: The Machine-to-Machine Measurement (M3) framework, available online, is proposed to rapidly design and develop semantic-based cross-domain IoT applications by reusing as much of the domain knowledge (ontologies, datasets and rules) as possible.
Abstract: Semantic Web of Things is a new field combining Semantic Web and Internet of Things technologies to be surrounded by smart objects and applications connected to the Web. On one hand, one of the Linked Open Data applications, called DataHub aims at referencing datasets, on the other hand, the Linked Open Vocabularies (LOV) references more than 400 ontologies. However, we discovered that more than 200 ontology-based projects relevant for IoT are not referenced on such tools since domain experts are not aware of them nor of the semantic web best practices. We propose the Machine-to-Machine Measurement (M3) framework, available online, to rapidly design and develop semantic-based cross-domain IoT applications by reusing as much as possible the domain knowledge (ontologies, datasets and rules). To achieve this goal, there are challenging steps: (1) referencing and classifying semantic-based projects relevant for IoT, (2) re-engineering a dataset of interoperable domain rules to deduce high-level abstractions from sensor data, (3) re-engineering an interoperable cross-domain knowledge to combine domains, and (4) assisting developers in designing IoT applications by designing pre-defined templates. In this article, we are focused on referencing and classifying semantic-based projects relevant for IoT by designing the Linked Open Vocabularies for Internet of Things (LOV4IoT) dataset. We also design a dataset of interoperable domain rules to deduce high-level abstractions from sensor data by designing the Sensor-based Linked Open Rules (S-LOR). This work has been applied to two uses cases: (1) redesigning a security and cross-domain knowledge base to assist users in suggesting security mechanisms to secure their applications, and (2) designing semantic-based IoT applications embedded in Android-powered devices.
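
A minimal, non-semantic sketch of the rule idea behind S-LOR (the actual rules are interoperable semantic-web rules; the thresholds and labels below are invented for illustration): interoperable domain rules turn raw sensor values into high-level abstractions that cross-domain applications can reason about.

```python
def apply_sensor_rules(measurement, rules):
    """Return the high-level labels deduced from one (sensor_kind, value) reading."""
    kind, value = measurement
    return [label for rule_kind, predicate, label in rules
            if rule_kind == kind and predicate(value)]

rules = [
    ("temperature", lambda v: v >= 38.0, "Fever"),        # health domain
    ("temperature", lambda v: v >= 30.0, "HotWeather"),   # weather domain
    ("luminosity",  lambda v: v < 10.0,  "Dark"),
]
print(apply_sensor_rules(("temperature", 39.2), rules))   # ['Fever', 'HotWeather']
```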


Proceedings ArticleDOI
01 Jan 2016
TL;DR: The authors propose a neural model that translates natural language queries into regular expressions embodying their meaning, achieving a performance gain of 19.6% over previous state-of-the-art models.
Abstract: This paper explores the task of translating natural language queries into regular expressions which embody their meaning. In contrast to prior work, the proposed neural model does not utilize domain-specific crafting, learning to translate directly from a parallel corpus. To fully explore the potential of neural models, we propose a methodology for collecting a large corpus of regular expression, natural language pairs. Our resulting model achieves a performance gain of 19.6% over previous state-of-the-art models.


Journal ArticleDOI
TL;DR: The proposed semantic-based approach to the unified representation of healthcare domain knowledge and patient data for practical clinical decision-making applications has been successfully validated in a case study of type 2 diabetes mellitus inpatient management.

Journal ArticleDOI
TL;DR: One finding from this survey is the high dependence of the linguistic knowledge base, expert knowledge base and ontology on volatile expert knowledge; these are proposed for knowledge-based business management and other knowledge-related applications.

Proceedings ArticleDOI
05 Apr 2016
TL;DR: This paper utilizes the most popular online software development community (StackOverflow) as a source of knowledge for technology decisions, to support architecture knowledge management approaches with more efficient methods for knowledge capturing.
Abstract: Architectural decisions have a big influence on basic properties of a software system, and are difficult to change once implemented. Technology decisions deserve special attention because they are one of the most frequently occurring types of architectural decisions. Architectural knowledge is crucial for good decisions. Current architecture knowledge management approaches try to support architects by offering a rich base of architectural solutions and design decision rules. However, they mostly depend on manually capturing and maintaining the architectural knowledge. In this paper, we utilize the most popular online software development community (StackOverflow) as a source of knowledge for technology decisions to support architecture knowledge management approaches with a more efficient methods for knowledge capturing. We conducted an exploratory study, and followed a qualitative and quantitative content analysis approach. We analysed the posts in this community to identify architecture-relevant and technology-related knowledge, and to classify the posts into different types for the purpose of knowledge structuring. In addition, we evaluated our findings through feedback from practitioners.