
Showing papers on "Domain knowledge published in 2014"


Journal ArticleDOI
TL;DR: Opening up the idea of pluralism as a driving force in the knowledge economy pushes organizations into a permanent, cumulative process of adaptation and re-creation through innovative means of social interaction in global environments.
Abstract: Purpose – The purpose of this special issue is to cover a substantial range of approaches to knowledge management, with a penetrating inquiry that goes beyond intra-organizational learning processes to include inter-organizational perspectives. Design/methodology/approach – As pointed out by the literature on various aspects of knowledge processes within and between organizations, the work has been organized coherently around two “strains” of topics: the first focused on managerial practices and operative directions of knowledge management, the other on applications of knowledge management to inter-firm networks. Qualitative as well as quantitative papers have been welcomed. Findings – Opening up the idea of pluralism as a driving force in the knowledge economy pushes organizations into a permanent, cumulative process of adaptation and re-creation through innovative means of social interaction in global environments. Research limitations/implications – The dynamic nature of the field is reflected.

322 citations


Proceedings ArticleDOI
11 Nov 2014
TL;DR: An adaptive ranking approach that leverages domain knowledge through functional decompositions of source code files into methods, API descriptions of library components used in the code, the bug-fixing history, and the code change history is introduced.
Abstract: When a new bug report is received, developers usually need to reproduce the bug and perform code reviews to find the cause, a process that can be tedious and time consuming. A tool for ranking all the source files of a project with respect to how likely they are to contain the cause of the bug would enable developers to narrow down their search and potentially could lead to a substantial increase in productivity. This paper introduces an adaptive ranking approach that leverages domain knowledge through functional decompositions of source code files into methods, API descriptions of library components used in the code, the bug-fixing history, and the code change history. Given a bug report, the ranking score of each source file is computed as a weighted combination of an array of features encoding domain knowledge, where the weights are trained automatically on previously solved bug reports using a learning-to-rank technique. We evaluated our system on six large scale open source Java projects, using the before-fix version of the project for every bug report. The experimental results show that the newly introduced learning-to-rank approach significantly outperforms two recent state-of-the-art methods in recommending relevant files for bug reports. In particular, our method makes correct recommendations within the top 10 ranked source files for over 70% of the bug reports in the Eclipse Platform and Tomcat projects.
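The scoring step described above can be sketched very simply: each candidate file receives a weighted sum of its feature values, and files are ranked by that score. The feature names and weights below are hypothetical; in the paper the weights are trained automatically with a learning-to-rank technique on previously fixed bugs.

```python
# Rank source files for a bug report by a weighted combination of
# domain-knowledge features (feature names and weights are illustrative).

def rank_files(file_features, weights):
    """file_features: {filename: {feature_name: value}}.
    Returns filenames sorted by descending weighted score."""
    def score(feats):
        return sum(weights.get(name, 0.0) * value
                   for name, value in feats.items())
    return sorted(file_features, key=lambda f: score(file_features[f]),
                  reverse=True)

# Hypothetical features: lexical similarity to the report, similarity of
# API descriptions of used libraries, recency of bug fixes, change frequency.
weights = {"lexical_sim": 0.5, "api_sim": 0.2,
           "fix_recency": 0.2, "change_freq": 0.1}
files = {
    "Parser.java": {"lexical_sim": 0.9, "api_sim": 0.4,
                    "fix_recency": 0.7, "change_freq": 0.3},
    "Util.java":   {"lexical_sim": 0.2, "api_sim": 0.1,
                    "fix_recency": 0.1, "change_freq": 0.9},
}
print(rank_files(files, weights))  # ['Parser.java', 'Util.java']
```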

271 citations


Journal ArticleDOI
TL;DR: An unsupervised domain adaptation method based on an Adaptive Denoising Autoencoder, in which prior knowledge learned from a target set is used to regularize training on a source set, achieving a matched feature space representation for the target and source sets while ensuring target domain knowledge transfer.
Abstract: With the availability of speech data obtained from different devices and varied acquisition conditions, we are often faced with scenarios where the intrinsic discrepancy between the training and the test data has an adverse impact on affective speech analysis. To address this issue, this letter introduces an unsupervised domain adaptation method based on an Adaptive Denoising Autoencoder, where prior knowledge learned from a target set is used to regularize the training on a source set. Our goal is to achieve a matched feature space representation for the target and source sets while ensuring target domain knowledge transfer. The method has been successfully evaluated on the 2009 INTERSPEECH Emotion Challenge’s FAU Aibo Emotion Corpus as the target corpus and two other publicly available speech emotion corpora as sources. The experimental results show that our method significantly improves over the baseline performance and outperforms related feature domain adaptation methods.
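The denoising idea at the heart of such models can be illustrated in one dimension: reconstruct a clean signal from an additively corrupted copy. The toy below (not the paper's model, which is a multi-layer autoencoder with target-domain regularization) learns a single reconstruction weight by stochastic gradient descent; the optimal weight shrinks below 1 as the corruption noise grows.

```python
import random

# Toy 1-D denoising "autoencoder": learn a weight w so that
# w * (x + noise) ≈ x. Purely illustrative of the corruption /
# reconstruction objective used by denoising autoencoders.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]
sigma = 0.5    # corruption noise level
w, lr = 0.0, 0.02

for x in data:
    x_tilde = x + random.gauss(0.0, sigma)     # corrupt the input
    grad = -2.0 * (x - w * x_tilde) * x_tilde  # d/dw of (x - w*x_tilde)^2
    w -= lr * grad

# For additive noise the optimum is E[x^2] / (E[x^2] + sigma^2),
# i.e. about 0.8 here: reconstruction shrinks noisy inputs toward 0.
print(w)
```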

253 citations


Journal ArticleDOI
TL;DR: This study shows that DM techniques are valuable for knowledge discovery in BAS databases; however, solid domain knowledge is still needed to apply the discovered knowledge and achieve better building operational performance.

213 citations


Proceedings ArticleDOI
01 Oct 2014
TL;DR: A tensor decomposition approach for knowledge base embedding that is highly scalable, and is especially suitable for relation extraction by leveraging relational domain knowledge about entity type information, which is significantly faster than previous approaches and better able to discover new relations missing from the database.
Abstract: While relation extraction has traditionally been viewed as a task relying solely on textual data, recent work has shown that by taking as input existing facts in the form of entity-relation triples from both knowledge bases and textual data, the performance of relation extraction can be improved significantly. Following this new paradigm, we propose a tensor decomposition approach for knowledge base embedding that is highly scalable, and is especially suitable for relation extraction. By leveraging relational domain knowledge about entity type information, our learning algorithm is significantly faster than previous approaches and is better able to discover new relations missing from the database. In addition, when applied to a relation extraction task, our approach alone is comparable to several existing systems, and improves the weighted mean average precision of a state-of-the-art method by 10 points when used as a subcomponent.
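Tensor-decomposition embeddings of this kind typically score a fact (head, relation, tail) with a bilinear form: the relation is a matrix that interacts the two entity vectors. A minimal, dependency-free sketch with made-up 2-dimensional embeddings (not the paper's learned parameters):

```python
# Bilinear scoring s(h, r, t) = h^T W_r t for a knowledge-base triple,
# written with plain lists so it runs anywhere.

def bilinear_score(h, W, t):
    """h, t: entity embedding vectors; W: relation matrix."""
    Wt = [sum(W[i][j] * t[j] for j in range(len(t))) for i in range(len(W))]
    return sum(h[i] * Wt[i] for i in range(len(h)))

# Tiny hypothetical embeddings for two entities and one relation.
h = [1.0, 0.5]
t = [0.2, 1.0]
W_born_in = [[0.9, 0.1],
             [0.0, 0.8]]
print(bilinear_score(h, W_born_in, t))  # 0.68
```

In the paper's setting, entity-type knowledge additionally restricts which (head, tail) pairs are even considered for a relation, which is one source of its speedup.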

196 citations


Book
02 Apr 2014
TL;DR: This book represents the first time that the corporate and academic worlds have collaborated to integrate the research and commercial benefits of knowledge-based configuration.
Abstract: Knowledge-based Configuration incorporates knowledge representation formalisms to capture complex product models and reasoning methods to provide intelligent interactive behavior with the user. This book represents the first time that the corporate and academic worlds have collaborated to integrate the research and commercial benefits of knowledge-based configuration. Foundational interdisciplinary material is provided for composing models from increasingly complex products and services. Case studies, the latest research, and graphical knowledge representations that increase understanding of knowledge-based configuration provide a toolkit to continue to push the boundaries of what configurators can do and how they enable companies and customers to thrive.
- Includes detailed discussion of state-of-the-art configuration knowledge engineering approaches such as automated testing and debugging, redundancy detection, and conflict management
- Provides an overview of the application of knowledge-based configuration technologies in the form of real-world case studies from SAP, Siemens, Kapsch, and more
- Explores the commercial benefits of knowledge-based configuration technologies to business sectors from services to industrial equipment
- Uses concepts that are based on an example personal computer configuration knowledge base represented in a UML-based graphical language

193 citations


Proceedings Article
21 Jun 2014
TL;DR: A novel method to mine prior knowledge dynamically during the modeling process, together with a new topic model that uses this knowledge to guide inference; the approach offers a novel lifelong learning algorithm for topic discovery.
Abstract: Topic modeling has been commonly used to discover topics from document collections. However, unsupervised models can generate many incoherent topics. To address this problem, several knowledge-based topic models have been proposed to incorporate prior domain knowledge from the user. This work advances this research much further and shows that without any user input, we can mine the prior knowledge automatically and dynamically from topics already found from a large number of domains. This paper first proposes a novel method to mine such prior knowledge dynamically in the modeling process, and then a new topic model to use the knowledge to guide the model inference. What is also interesting is that this approach offers a novel lifelong learning algorithm for topic discovery, which exploits the big (past) data and knowledge gained from such data for subsequent modeling. Our experimental results using product reviews from 50 domains demonstrate the effectiveness of the proposed approach.
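The knowledge-mining step can be approximated as follows: look at the top words of topics already found in many past domains, and treat word pairs that repeatedly appear together in those top-word lists as candidate prior knowledge (the paper's must-links). A hedged sketch with toy topics:

```python
from itertools import combinations
from collections import Counter

def mine_must_links(past_topics, min_support=2):
    """past_topics: list of topics, each a list of top words (possibly
    from many domains). Returns word pairs that co-occur in at least
    min_support topics, as candidate must-link knowledge."""
    counts = Counter()
    for topic in past_topics:
        for pair in combinations(sorted(set(topic)), 2):
            counts[pair] += 1
    return {pair for pair, c in counts.items() if c >= min_support}

# Toy top-word lists from topics discovered in three past domains.
past = [["price", "cheap", "expensive"],
        ["price", "cheap", "cost"],
        ["battery", "life", "charge"]]
print(mine_must_links(past))  # {('cheap', 'price')}
```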

180 citations


Journal ArticleDOI
TL;DR: This study conceptualizes and empirically tests a multilevel model of knowledge exchange in electronic networks of practice (ENoP) that includes the characteristics of knowledge seekers and knowledge contributors as well as their dyadic relationship from an activity-centered language/action point of view.
Abstract: Organizational knowledge is one of the most important assets of an enterprise. Therefore, many organizations invest in enterprise social media (ESM) to establish electronic networks of practice and to foster knowledge exchange among employees. ESM improves interaction transparency and can be regarded as a sociotechnical system that provides a language for communication and symbolic action as well as a better sense of others' social identity. Accordingly, the individual characteristics of knowledge seekers and contributors determine why and how interactions occur. However, existing studies tend to focus only on knowledge contributors' characteristics and to treat knowledge as an object that needs to be transferred. To address this gap, this study conceptualizes and empirically tests a multilevel model of knowledge exchange in electronic networks of practice (ENoP) that includes the characteristics of knowledge seekers and knowledge contributors as well as their dyadic relationship from an activity-centered language/action point of view. A dataset of 15,505 enterprise microblogging messages reveals that knowledge seekers' characteristics and relational factors drive knowledge exchanges in social media-enabled ENoP. Focusing on organizations with knowledge exchanges supported by information technology, our research extends prior findings by providing the first evidence that the communicative act expressed by question-answer pairs impacts the quality of knowledge exchanged.

155 citations


Proceedings ArticleDOI
24 Aug 2014
TL;DR: This research proposes to learn as humans do, i.e., retaining the results learned in the past and using them to help future learning, and mines two forms of knowledge: must-link and cannot-link.
Abstract: Topic modeling has been widely used to mine topics from documents. However, a key weakness of topic modeling is that it needs a large amount of data (e.g., thousands of documents) to provide reliable statistics for generating coherent topics. In practice, many document collections do not have that many documents, and given a small number of documents, the classic topic model LDA generates very poor topics. Even with a large volume of data, unsupervised learning of topic models can still produce unsatisfactory results. In recent years, knowledge-based topic models have been proposed, which ask human users to provide some prior domain knowledge to guide the model to produce better topics. Our research takes a radically different approach. We propose to learn as humans do, i.e., retaining the results learned in the past and using them to help future learning. When faced with a new task, we first mine some reliable (prior) knowledge from past learning/modeling results and then use it to guide the model inference to generate more coherent topics. This approach is possible because of the big data readily available on the Web. The proposed algorithm mines two forms of knowledge: must-link (meaning that two words should be in the same topic) and cannot-link (meaning that two words should not be in the same topic). It also deals with two problems of the automatically mined knowledge, i.e., wrong knowledge and knowledge transitivity. Experimental results using review documents from 100 product domains show that the proposed approach makes dramatic improvements over state-of-the-art baselines.
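One common way such mined knowledge guides inference (e.g., via a generalized Pólya urn scheme) is that assigning a word to a topic also partially promotes its must-linked words in that topic, pulling linked words into the same topic. The count-update sketch below is a simplified illustration of that flavor, not the paper's exact sampler:

```python
from collections import defaultdict

def promote(topic_counts, topic, word, must_links, boost=0.5):
    """Increment word's count in a topic and give a partial boost to
    words that must-link with it (generalized Polya urn flavor)."""
    topic_counts[topic][word] += 1.0
    for a, b in must_links:
        if word == a:
            topic_counts[topic][b] += boost
        elif word == b:
            topic_counts[topic][a] += boost

counts = defaultdict(lambda: defaultdict(float))
must = {("price", "cheap")}  # a mined must-link pair
promote(counts, 0, "price", must)
print(counts[0]["price"], counts[0]["cheap"])  # 1.0 0.5
```

Cannot-links would act in the opposite direction, discouraging two words from accumulating counts in the same topic.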

150 citations


01 Jan 2014
TL;DR: In this article, an extensive review of the literature dealing with the newly evolving field of knowledge for development and its management is carried out using the process-tracing method. The authors see the origins of knowledge management for development in the management sciences of the 1950s and 1960s and trace its journey from there to the development studies of the 1990s and 2000s.
Abstract: This paper undertakes an extensive review of the literature dealing with the newly evolving field of knowledge for development and its management. Using the process-tracing method, it sees the origins of the emergence of knowledge management for development in the management sciences of the 1950s and 1960s and traces its journey from there to the development studies of the 1990s and 2000s. It maintains that, since its arrival in the domain of development studies, practice and research on the issue have been evolving in three dimensions, namely the micro, the meso and the macro dimensions. The micro dimension concentrates on the individual level, the meso on the organisational level, and the macro on the global systemic level. The first two dimensions constitute the area designated as ‘knowledge management for development’ (KM4D) and the last dimension is designated as ‘knowledge for development’ (K4D). If one adheres to this differentiation, one arrives at three fundamental findings:
- While there are plenty of analyses dealing with the micro and meso dimensions, there is a lack of analysis and prognosis for programmatic action on the macro dimension.
- Following each of these dimensions in isolation leads one to different programmatic action.
- There is, for this reason, a need to balance the three.
Based on the above, this paper criticises the monoculturality in the production of global development knowledge, which is primarily Western, as well as the inadequacy of existing information and communications technologies (ICT). It argues that the opportunities of joint knowledge creation between the global North and South and of more inclusive knowledge dissemination in the South offered by the ICTs are not being optimally utilised.
It then charts a research course that adequately covers the three dimensions mentioned above, while specifying clear research questions aimed at ameliorating the inadequacies of global cooperation in knowledge production and highlighting necessary corrections tailored to specific inadequacies in specific global regions.

143 citations


Journal ArticleDOI
TL;DR: An ontology-based hybrid approach to activity modeling that combines domain knowledge based model specification and data-driven model learning is introduced that has been implemented in a feature-rich assistive living system.
Abstract: Activity models play a critical role for activity recognition and assistance in ambient assisted living. Existing approaches to activity modeling suffer from a number of problems, e.g., cold-start, model reusability, and incompleteness. In an effort to address these problems, we introduce an ontology-based hybrid approach to activity modeling that combines domain knowledge based model specification and data-driven model learning. Central to the approach is an iterative process that begins with “seed” activity models created by ontological engineering. The “seed” models are deployed, and subsequently evolved through incremental activity discovery and model update. While our previous work has detailed ontological activity modeling and activity recognition, this paper focuses on the systematic hybrid approach and associated methods and inference rules for learning new activities and user activity profiles. The approach has been implemented in a feature-rich assistive living system. Analysis of the experiments conducted has been undertaken in an effort to test and evaluate the activity learning algorithms and associated mechanisms.

Journal ArticleDOI
TL;DR: In this paper, a general framework for conceptual knowledge is proposed that divides it into two facets: knowledge of general principles and knowledge of the principles underlying procedures. However, the definitions provided in the literature are often vague or poorly operationalized, and the tasks used to measure conceptual knowledge do not always align with theoretical claims about mathematical understanding.

Journal ArticleDOI
TL;DR: The study shows correlations for semantic solution transfer, quantity of ideation, fixation, novelty and quality when developing solutions for transactional problems by means of DbA methods.

Book
26 Mar 2014
TL;DR: This book discusses strategies for managing knowledge, measuring and safeguarding it, and how to put knowledge management into practice.
Abstract: 1 On the Way to a Knowledge Society -- 2 Knowledge in Organisations -- 3 Organisational Forms to Leverage Knowledge -- 4 Knowledge is Human -- 5 Strategies for Managing Knowledge -- 6 Context Specific Knowledge Management Strategies -- 7 How Can Information and Communication Technology Support Knowledge Work -- 8 Measuring and Safeguarding Knowledge -- 9 How to Put Knowledge Management into Practice.

Journal ArticleDOI
Paul Cooper
TL;DR: Data mining is the application of improved techniques of data organization, storage, and analysis to large datasets, and has led to the discovery of previously unknown knowledge and relationships.

Journal ArticleDOI
TL;DR: A novel method to efficiently provide better Web-page recommendation through semantic-enhancement by integrating the domain and Web usage knowledge of a website is proposed.
Abstract: Web-page recommendation plays an important role in intelligent Web systems. Useful knowledge discovery from Web usage data and satisfactory knowledge representation for effective Web-page recommendations are crucial and challenging. This paper proposes a novel method to efficiently provide better Web-page recommendation through semantic-enhancement by integrating the domain and Web usage knowledge of a website. Two new models are proposed to represent the domain knowledge. The first model uses an ontology to represent the domain knowledge. The second model uses one automatically generated semantic network to represent domain terms, Web-pages, and the relations between them. Another new model, the conceptual prediction model, is proposed to automatically generate a semantic network of the semantic Web usage knowledge, which is the integration of domain knowledge and Web usage knowledge. A number of effective queries have been developed to query about these knowledge bases. Based on these queries, a set of recommendation strategies have been proposed to generate Web-page candidates. The recommendation results have been compared with the results obtained from an advanced existing Web Usage Mining (WUM) method. The experimental results demonstrate that the proposed method produces significantly higher performance than the WUM method.
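The semantic-network idea can be pictured as a small graph linking domain terms to pages, with term-to-term relations on top; a recommendation query then returns pages connected to the current page's terms or related terms. The terms, relations, and pages below are made up for illustration:

```python
# Minimal semantic network: domain terms <-> web pages, plus
# term-to-term relations. A recommendation query walks the graph.

term_pages = {
    "laptops":  ["/products/laptops", "/reviews/laptops"],
    "warranty": ["/support/warranty"],
}
related_terms = {"laptops": ["warranty"]}

def recommend(current_terms, exclude):
    """Pages linked to the current page's terms or to related terms,
    excluding pages the visitor has already seen."""
    seen, out = set(exclude), []
    terms = list(current_terms)
    terms += [r for t in current_terms for r in related_terms.get(t, [])]
    for t in terms:
        for page in term_pages.get(t, []):
            if page not in seen:
                seen.add(page)
                out.append(page)
    return out

print(recommend(["laptops"], exclude=["/products/laptops"]))
# ['/reviews/laptops', '/support/warranty']
```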

Journal ArticleDOI
TL;DR: The findings provide insights as to both the positive and negative effects of domain knowledge on requirements elicitation via interview, as perceived by participants with and without domain knowledge, and show the existence of an actual effect on the course of the interviews.
Abstract: Requirements elicitation is the first activity in the requirements engineering process. It includes learning, surfacing, and discovering the requirements of the stakeholders of the developed system. Various elicitation techniques exist to help analysts elicit the requirements from the different stakeholders; the most commonly used technique is the interview. Analysts may have domain knowledge prior to the elicitation process. Such knowledge is commonly assumed to have positive effects on requirements engineering processes, in that it fosters communication, and a mutual understanding of the needs. However, to a minor extent, some negative effects have also been reported. This paper presents an empirical study in which the perceived and actual effects of prior domain knowledge on requirements elicitation via interviews were examined. The results indicate that domain knowledge affects elicitation via interview in two main aspects: communication with the customers and understanding their needs. The findings provide insights as to both the positive and negative effects of domain knowledge on requirements elicitation via interview, as perceived by participants with and without domain knowledge, and show the existence of an actual effect on the course of the interviews. Furthermore, these insights can be utilized in practice to support analysts in the elicitation process and to form requirements analysis teams. They highlight the different contributions that can be provided by analysts with different levels of domain knowledge in requirements analysis teams and the synergy that can be gained by forming heterogeneous teams of analysts.

Journal ArticleDOI
TL;DR: The largest existing taxonomy of common knowledge is blended with a natural-language-based semantic network of common-sense knowledge, and multidimensional scaling is applied to the resulting knowledge base for open-domain opinion mining and sentiment analysis.
Abstract: The ability to understand natural language text is far from being emulated in machines. One of the main hurdles to overcome is that computers lack both the common and common-sense knowledge that humans normally acquire during the formative years of their lives. To really understand natural language, a machine should be able to comprehend this type of knowledge, rather than merely relying on the valence of keywords and word co-occurrence frequencies. In this article, the largest existing taxonomy of common knowledge is blended with a natural-language-based semantic network of common-sense knowledge. Multidimensional scaling is applied on the resulting knowledge base for open-domain opinion mining and sentiment analysis.

Journal ArticleDOI
TL;DR: An original Experience Feedback process dedicated to maintenance is suggested, which capitalizes on past activities by formalizing domain knowledge and experiences in a visual knowledge representation formalism with a logical foundation, and by extracting new knowledge with association rule mining algorithms.
Abstract: Knowledge is nowadays considered a significant source of performance improvement, but it may be difficult to identify, structure, analyse and reuse properly. A possible source of knowledge lies in the data and information stored in various modules of industrial information systems, like CMMS (Computerized Maintenance Management Systems) for maintenance. In that context, the main objective of this paper is to propose a framework for managing and generating knowledge from information on past experiences, in order to improve decisions related to the maintenance activity. To that end, we suggest an original Experience Feedback process dedicated to maintenance, which capitalizes on past activities by (i) formalizing the domain knowledge and experiences using a visual knowledge representation formalism with a logical foundation (Conceptual Graphs); (ii) extracting new knowledge with association rule mining algorithms, using an innovative interactive approach; and (iii) interpreting and evaluating this new knowledge through the reasoning operations of Conceptual Graphs. The suggested method is illustrated on a case study based on real data dealing with the maintenance of overhead cranes.
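The association-rule step works on support and confidence computed over past maintenance records. The sketch below mines the simplest (one-antecedent, one-consequent) rules, Apriori-style; the crane symptom/cause records are hypothetical:

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.4, min_conf=0.6):
    """Mine 1 -> 1 association rules (antecedent -> consequent) from
    itemset transactions: the simplest case of Apriori-style mining."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n
    rules = []
    for a, b in combinations(sorted(items), 2):
        for x, y in ((a, b), (b, a)):   # both rule directions
            s = support({x, y})
            if s >= min_support and s / support({x}) >= min_conf:
                rules.append((x, y, round(s, 2)))
    return rules

# Hypothetical overhead-crane maintenance records (co-observed findings).
records = [{"motor_overheat", "worn_brake"},
           {"motor_overheat", "worn_brake"},
           {"motor_overheat", "loose_cable"},
           {"worn_brake"},
           {"loose_cable"}]
print(mine_rules(records))
```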

Journal ArticleDOI
TL;DR: This work develops an approach that abstracts an event log to the abstraction level needed by the business, deals with n:m relations between events and activities, and supports concurrency.

Journal ArticleDOI
TL;DR: The knowledge management system encapsulates complex logic expressions and ontology management, making it easy for users to obtain successful results that they may organise in their own way; it thus becomes a powerful knowledge management process that combines epistemological and ontological knowledge spirals.
Abstract: An R&I&i process for the development of a knowledge management system is presented. It transforms different institutions' experiences into organisational knowledge applicable to an entire sector, specifically higher education. The knowledge management system allows classifying, organising, distributing and facilitating the application of the knowledge generated by the faculty. A study with more than 1000 system users reflects that the system helps faculty in the way they perform educational innovation activities. The supported model integrates both Nonaka's epistemological and ontological spirals. This allows ontologies to be defined and used to transform individual knowledge into organisational knowledge. The knowledge management system encapsulates complex logic expressions and ontology management, making it easy for users to obtain successful results that they may organise in their own way; it thus becomes a powerful knowledge management process that combines epistemological and ontological knowledge spirals to convert individual experiences in educational innovation into organisational knowledge in the higher education sector.

Journal ArticleDOI
TL;DR: The self-monitoring hypothesis states that the knowledge-telling bias may arise from tutors' limited or inadequate evaluation of their own knowledge and understanding of the material.
Abstract: Prior research has established that learning by teaching depends upon peer tutors’ engagement in knowledge-building, in which tutors integrate their knowledge and generate new knowledge through reasoning. However, many tutors adopt a knowledge-telling bias defined by shallow summarizing of source materials and didactic lectures. Knowledge-telling contributes little to learning with deeper understanding. In this paper, we consider the self-monitoring hypothesis, which states that the knowledge-telling bias may arise due to tutors’ limited or inadequate evaluation of their own knowledge and understanding of the material. Tutors who fail to self-monitor may remain unaware of knowledge gaps or other confusions that could be repaired via knowledge-building. To test this hypothesis, sixty undergraduates were recruited to study and then teach a peer about a scientific topic. Data included tests of recall and comprehension, as well as extensive analyses of the explanations, questions, and self-monitoring that occurred during tutoring. Results show that tutors’ comprehension-monitoring and domain knowledge, along with pupils’ questions, were significant predictors of knowledge-building, which was in turn predictive of deeper understanding of the material. Moreover, tutorial interactions and questions appeared to naturally promote tutors’ self-monitoring. However, despite frequent comprehension-monitoring, many tutors still displayed a strong knowledge-telling bias. Thus, peer tutors appeared to experience more difficulty with self-regulatory aspects of knowledge-building (i.e., responding appropriately to perceived knowledge gaps and confusions) than with self-monitoring. Implications and alternative hypotheses for future research are discussed.

Book ChapterDOI
01 Jan 2014
TL;DR: This work presents a feature extraction framework which exploits different unsupervised feature learning techniques to learn useful feature representations from accelerometer and gyroscope sensor data for human activity recognition.
Abstract: Feature representation has a significant impact on human activity recognition. Commonly used hand-crafted features rely heavily on specific domain knowledge and may not adapt well to a particular dataset. To alleviate the problems of hand-crafted features, we present a feature extraction framework which exploits different unsupervised feature learning techniques to learn useful feature representations from accelerometer and gyroscope sensor data for human activity recognition. The unsupervised learning techniques we investigate include the sparse auto-encoder, the denoising auto-encoder and PCA. We evaluate performance on a public human activity recognition dataset and also compare our method with traditional features and another unsupervised feature learning method. The results show that the learned features of our framework outperform the other two methods. The sparse auto-encoder achieves better results than the other two techniques within our framework.
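For context, the "traditional features" such pipelines compare against are usually simple per-window statistics over the raw sensor stream. The sketch below shows that windowing/feature step (window size and statistics chosen arbitrarily here); unsupervised learners like a sparse auto-encoder would replace the hand-picked statistics with learned representations of the same windows.

```python
import statistics

def window_features(signal, width=4, step=4):
    """Split a 1-D accelerometer stream into fixed-size windows and
    compute hand-crafted (mean, stdev) features per window."""
    feats = []
    for start in range(0, len(signal) - width + 1, step):
        win = signal[start:start + width]
        feats.append((statistics.fmean(win), statistics.stdev(win)))
    return feats

accel = [0.0, 0.1, 0.0, -0.1, 1.0, 1.2, 0.8, 1.0]  # toy axis samples
print(window_features(accel))
```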

Journal ArticleDOI
TL;DR: This work provides a consistency indicator based on a variability-aware type system, mines features at a fine level of granularity, and exploits domain knowledge about the relationship between features when available to support developers in locating features.
Abstract: Software product line engineering is an efficient means to generate a set of tailored software products from a common implementation. However, adopting a product-line approach poses a major challenge and significant risks, since typically legacy code must be migrated toward a product line. Our aim is to lower the adoption barrier by providing semi-automatic tool support, called variability mining, to support developers in locating, documenting, and extracting implementations of product-line features from legacy code. Variability mining combines prior work on concern location, reverse engineering, and variability-aware type systems, but is tailored specifically for use in product lines. Our work pursues three technical goals: (1) we provide a consistency indicator based on a variability-aware type system, (2) we mine features at a fine level of granularity, and (3) we exploit domain knowledge about the relationship between features when available. With a quantitative study, we demonstrate that variability mining can efficiently support developers in locating features.

Journal ArticleDOI
TL;DR: The experimental results show that the proposed FSS-MGSA has the ability of selecting the discriminating input features correctly and can achieve high accuracy of classification, which is comparable to or better than well-known similar classifier systems.

Journal ArticleDOI
TL;DR: This work states that there is a lack of formal representation of the relevant knowledge domain for neurodegenerative diseases such as Alzheimer's disease.
Abstract: Background Biomedical ontologies offer the capability to structure and represent domain-specific knowledge semantically. Disease-specific ontologies can facilitate knowledge exchange across multiple disciplines, and ontology-driven mining approaches can generate great value for modeling disease mechanisms. However, in the case of neurodegenerative diseases such as Alzheimer's disease, there is a lack of formal representation of the relevant knowledge domain. Methods Alzheimer's disease ontology (ADO) is constructed in accordance with the ontology building life cycle. The Protege OWL editor was used as a tool for building ADO in Ontology Web Language format. Results ADO was developed with the purpose of containing information relevant to four main biological views—preclinical, clinical, etiological, and molecular/cellular mechanisms—and was enriched by adding synonyms and references. Validation of the lexicalized ontology by means of named entity recognition-based methods showed a satisfactory performance (F score = 72%). In addition to structural and functional evaluation, a clinical expert in the field performed a manual evaluation and curation of ADO. Through integration of ADO into an information retrieval environment, we show that the ontology supports semantic search in scientific text. The usefulness of ADO is authenticated by dedicated use case scenarios. Conclusions The development of ADO as an open ontology is a first attempt to organize information related to Alzheimer's disease in a formalized, structured manner. We demonstrate that ADO is able to capture both established and scattered knowledge existing in scientific text.

Proceedings ArticleDOI
15 Sep 2014
TL;DR: This paper proposes to mine the human knowledge present in the form of input values, event sequences, and assertions, in the human-written test suites, and combine that inferred knowledge with the power of automated crawling, and extend the test suite for uncovered/unchecked portions of the web application under test.
Abstract: To test web applications, developers currently write test cases in frameworks such as Selenium. On the other hand, most web test generation techniques rely on a crawler to explore the dynamic states of the application. The first approach requires much manual effort, but benefits from the domain knowledge of the developer writing the test cases. The second one is automated and systematic, but lacks the domain knowledge required to be as effective. We believe combining the two can be advantageous. In this paper, we propose to (1) mine the human knowledge present in the form of input values, event sequences, and assertions, in the human-written test suites, (2) combine that inferred knowledge with the power of automated crawling, and (3) extend the test suite for uncovered/unchecked portions of the web application under test. Our approach is implemented in a tool called Testilizer. An evaluation of our approach indicates that Testilizer (1) outperforms a random test generator, and (2) on average, can generate test suites with improvements of up to 150% in fault detection rate and up to 30% in code coverage, compared to the original test suite.
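The first step of the approach above, mining input values and event sequences from human-written Selenium tests, can be sketched with simple pattern extraction over test source code. This is not Testilizer's implementation; the test snippet, element ids, and regular expressions are illustrative assumptions.

```python
import re

# Hypothetical Selenium-style test source; ids and values are invented.
HUMAN_TEST = '''
driver.findElement(By.id("username")).sendKeys("alice");
driver.findElement(By.id("password")).sendKeys("s3cret");
driver.findElement(By.id("login-btn")).click();
assertEquals("Dashboard", driver.getTitle());
'''

def mine_inputs(test_source):
    """Extract (element id, input value) pairs from sendKeys calls."""
    pattern = r'By\.id\("([^"]+)"\)\)\.sendKeys\("([^"]+)"\)'
    return re.findall(pattern, test_source)

def mine_event_sequence(test_source):
    """Extract the ordered list of element ids that the test clicks."""
    return re.findall(r'By\.id\("([^"]+)"\)\)\.click\(\)', test_source)
```

A crawler could then reuse the mined values (e.g. valid credentials) to reach states behind forms that random input generation would rarely pass, which is the intuition behind combining human knowledge with automated exploration.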

Book ChapterDOI
08 Dec 2014
TL;DR: A novel 3D ConvNets model for violence detection in video without using any prior knowledge is developed and results show that the method achieves superior performance without relying on handcrafted features.
Abstract: Whereas most research addresses the general action recognition problem, the detection of fights has received comparatively little attention. Such a capability may be of great importance. Typical methods rely on domain knowledge to construct complex handcrafted features from the input. Deep models, by contrast, can act directly on the raw input and automatically extract features. We therefore developed in this paper a novel 3D ConvNets model for violence detection in video without using any prior knowledge. To evaluate our method, experimental validation was conducted on the Hockey dataset. The results show that the method achieves superior performance without relying on handcrafted features.
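The key idea behind a 3D ConvNet, a kernel that slides over time as well as space so that motion between frames is captured directly, can be illustrated with a single valid-mode 3D convolution written out by hand. This is a pedagogical sketch of the operation, not the paper's network; any real model would stack many such layers with learned kernels.

```python
def conv3d(volume, kernel):
    """Valid-mode 3D convolution (cross-correlation, as in ConvNets) of a
    (time, height, width) volume with a (kt, kh, kw) kernel, single channel."""
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    kt, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for t in range(T - kt + 1):          # slide over time (frames)
        frame = []
        for i in range(H - kh + 1):      # slide over rows
            row = []
            for j in range(W - kw + 1):  # slide over columns
                s = sum(
                    volume[t + dt][i + di][j + dj] * kernel[dt][di][dj]
                    for dt in range(kt)
                    for di in range(kh)
                    for dj in range(kw)
                )
                row.append(s)
            frame.append(row)
        out.append(frame)
    return out
```

A kernel of shape (2, 1, 1) with weights +1 and -1 computes per-pixel frame differences, a crude motion detector, which is exactly the kind of spatiotemporal feature a 2D convolution over individual frames cannot express.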

Journal ArticleDOI
TL;DR: A synthesis of these interrelated components is proposed in the form of a Global Social Knowledge Management barrier framework that demonstrates the wide spectrum of possible challenges in globally distributed, social-software-supported knowledge management activities.

Journal ArticleDOI
TL;DR: Properly targeted faculty development has the potential to expedite the knowledge transformation process for clinical teachers.
Abstract: Context Clinical teachers in medicine face the daunting task of mastering the many domains of knowledge needed for practice and teaching. The breadth and complexity of this knowledge continue to increase, as does the difficulty of transforming the knowledge into concepts that are understandable to learners. Properly targeted faculty development has the potential to expedite the knowledge transformation process for clinical teachers. Methods Based on my own research in clinical teaching and faculty development, as well as the work of others, I describe the unique forms of clinical teacher knowledge, the transformation of that knowledge for teaching purposes and implications for faculty development. Results The following forms of knowledge for clinical teaching in medicine need to be mastered and transformed: (i) knowledge of medicine and patients; (ii) knowledge of context; (iii) knowledge of pedagogy and learners, and (iv) knowledge integrated into teaching scripts. This knowledge is employed and conveyed through the parallel processes of clinical reasoning and clinical instructional reasoning. Faculty development can facilitate this knowledge transformation process by: (i) examining, deconstructing and practising new teaching scripts; (ii) focusing on foundational concepts; (iii) demonstrating knowledge-in-use, and (iv) creating a supportive organisational climate for clinical teaching. Conclusions To become an excellent clinical teacher in medicine requires the transformation of multiple forms of knowledge for teaching purposes. These domains of knowledge allow clinical teachers to provide tailored instruction to learners at varying levels in the context of fast-paced and demanding clinical practice. Faculty development can facilitate this knowledge transformation process.