Showing papers presented at "International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management in 2009"


Book ChapterDOI
06 Oct 2009
TL;DR: The proposed heuristic for extracting the representative subset requires as main arguments a pairwise distance matrix, a representativeness criterion and a distance threshold under which two sequences are considered redundant or, equivalently, in each other's neighbourhood.
Abstract: This paper is concerned with the summarization of a set of categorical sequences. More specifically, the problem studied is the determination of the smallest possible number of representative sequences that ensure a given coverage of the whole set, i.e. that together have a given percentage of sequences in their neighbourhood. The proposed heuristic for extracting the representative subset requires as main arguments a pairwise distance matrix, a representativeness criterion and a distance threshold under which two sequences are considered redundant or, equivalently, in each other's neighbourhood. It first builds a list of candidates using a representativeness score and then eliminates redundancy. We also propose a visualization tool for rendering the results and quality measures for evaluating them. The proposed tools have been implemented in our TraMineR R package for mining and visualizing sequence data, and we demonstrate their efficiency on a real-world example from the social sciences. The methods are nonetheless by no means limited to social science data and should prove useful in many other domains.

37 citations
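The greedy idea behind the heuristic is compact enough to sketch. The authors' implementation is the TraMineR R package; the Python sketch below is only an illustration under our own assumptions (neighbourhood density as the representativeness score, invented function name and toy data), not the package's API.

```python
import numpy as np

def representatives(dist, threshold, coverage=0.25):
    """Greedily pick representative sequences until the requested
    fraction of all sequences lies in some representative's
    neighbourhood (distance below threshold)."""
    n = dist.shape[0]
    neighbours = dist < threshold        # boolean neighbourhood matrix
    score = neighbours.sum(axis=1)       # density-based representativeness
    covered = np.zeros(n, dtype=bool)
    reps = []
    for c in np.argsort(-score):         # best-scoring candidates first
        if any(dist[c, r] < threshold for r in reps):
            continue                     # redundant with a chosen rep
        reps.append(int(c))
        covered |= neighbours[c]
        if covered.mean() >= coverage:
            break
    return reps

# Toy 5x5 distance matrix: two clear groups of sequences.
D = np.array([[0., .1, .9, .8, .9],
              [.1, 0., .8, .9, .9],
              [.9, .8, 0., .2, .3],
              [.8, .9, .2, 0., .2],
              [.9, .9, .3, .2, 0.]])
print(representatives(D, threshold=0.4, coverage=1.0))  # e.g. [2, 0]
```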


Proceedings Article
01 Jan 2009
TL;DR: Results of an explorative multiple-case study investigating the concept, implementation and utilization of internal wikis in three Austrian enterprises suggest that the challenges and benefits of Web 2.0 technologies and applications for the enterprise are just starting to be systematically explored.
Abstract: We present the results of our explorative multiple-case study investigating the concept, implementation and utilization of internal wikis in three Austrian enterprises. We collected all data during structured interviews with the internal knowledge management experts responsible for the wiki implementation and from online surveys of the non-executive employees who use the wikis. Our contribution was highly motivated by the continuing discussion on Corporate Web 2.0 and Enterprise 2.0 and, by contrast, the unfortunate lack of well-grounded empirical studies. We feel that the challenges and benefits of Web 2.0 technologies and applications for the enterprise are just starting to be systematically explored.

34 citations


Proceedings Article
01 Jan 2009
TL;DR: This work examines seven Enterprise 2.0 tools in detail and derives a unifying multi-dimensional classification and evaluation framework that contributes to a better technical understanding of this emerging family of enterprise applications.
Abstract: There is a growing market for integrated web-based tools to support team collaboration and knowledge management within enterprises. The goal of this paper is to provide a detailed analysis of their concepts and services. We examine seven Enterprise 2.0 tools in detail and derive a unifying multi-dimensional classification and evaluation framework. For each dimension we identify several technical criteria to characterize the functional capabilities of a given tool. Based on this schema we provide a detailed description of the following commercial and open source tools: Alfresco Share, Atlassian Confluence, GroupSwim, Liferay Social Office, Microsoft Office SharePoint Server, Socialtext, Tricia. This work contributes to a better technical understanding of this emerging family of enterprise applications, highlights strengths and weaknesses of existing tools and identifies areas for further system research and development.

21 citations


Book ChapterDOI
06 Oct 2009
TL;DR: Cross-case analysis of enterprise wikis in three Austrian cases reveals commonalities and differences in usage motives, editing behaviour, individual and collective benefits, and obstacles, and derives a set of success factors guiding managers in future wiki projects.
Abstract: In this paper we present the results of our explorative multiple-case study investigating enterprise wikis in three Austrian cases. Our contribution was highly motivated by the ongoing discussion on Enterprise 2.0 in science and practice, and by the lack of well-grounded empirical research on how enterprise wikis are actually designed, implemented and, more importantly, utilized. We interviewed 7 corporate experts responsible for wiki operation and about 150 employees expected to support their daily business by using the wikis. The combination of qualitative data from the expert interviews and quantitative data from the user survey allows us to generate very interesting insights. Our cross-case analysis reveals commonalities and differences in usage motives, editing behaviour, individual and collective benefits, and obstacles, and, more importantly, derives a set of success factors guiding managers in future wiki projects.

18 citations


Proceedings Article
01 Jan 2009
TL;DR: This work uses a central RDF repository to capture both medical domain knowledge and image annotations, and understands medical knowledge engineering as an interactive process between the knowledge engineer and the clinician.
Abstract: In the medical domain, semantic image retrieval should provide the basis for decision support and computer-aided diagnosis. But knowledge engineers cannot easily acquire the necessary medical knowledge about the image contents. We present a set of techniques for annotating images and querying image data sets based on their semantics. The unification of semantic annotation (using a GUI) and querying (using natural dialogue) in biomedical image repositories is based on a unified view of the knowledge acquisition process. We use a central RDF repository to capture both medical domain knowledge and image annotations, and understand medical knowledge engineering as an interactive process between the knowledge engineer and the clinician. Our system also supports the interactive process between the dialogue engineer and the clinician.

16 citations
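As a flavour of the described architecture (one central RDF store behind both GUI-based annotation and dialogue-based querying), here is a minimal Python/rdflib sketch; the namespace, property names and data are invented for illustration, not taken from the paper.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/imaging#")  # hypothetical vocabulary
g = Graph()

# Annotation side: the GUI would add triples like these to the store.
g.add((EX.image42, EX.showsAnatomy, EX.LymphNode))
g.add((EX.image42, EX.annotatedBy, Literal("clinician_01")))

# Query side: a dialogue front end could translate "which images show
# lymph nodes?" into a SPARQL query over the same repository.
results = g.query("""
    PREFIX ex: <http://example.org/imaging#>
    SELECT ?img WHERE { ?img ex:showsAnatomy ex:LymphNode . }
""")
for (img,) in results:
    print(img)   # -> http://example.org/imaging#image42
```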


Book ChapterDOI
06 Oct 2009
TL;DR: This work presents an approach to ontology transformation based on transformation patterns, which could assist in many semantic tasks (such as reasoning, modularisation or matching) and can be applied to parts of ontologies called ontology patterns.
Abstract: As more and more ontology designers follow the pattern-based approach, automatic analysis of those structures and their exploitation in semantic tools is becoming more feasible and important. We present an approach to ontology transformation based on transformation patterns, which could assist in many semantic tasks (such as reasoning, modularisation or matching). Ontology transformation can be applied to parts of ontologies called ontology patterns. Detection of ontology patterns can be specific to a given use case, or generic. We first present generic detection patterns along with some experimental results, and then detection patterns specific to ontology matching. Furthermore, we detail the ontology transformation phase along with an example of a transformation pattern based on an alignment pattern.

15 citations
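The detect-then-transform idea can be pictured with a SPARQL CONSTRUCT query, where the WHERE clause acts as a detection pattern and the template as a transformation pattern. The pattern below (promoting a string-valued attribute to a first-class individual) is our own invented example, not one of the paper's patterns.

```python
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:paper1 ex:hasTopic "ontology matching" .
""", format="turtle")

# WHERE detects the source pattern; CONSTRUCT emits the transformed one.
transformed = g.query("""
    PREFIX ex: <http://example.org/>
    CONSTRUCT { ?s ex:about [ a ex:Topic ; ex:label ?t ] }
    WHERE     { ?s ex:hasTopic ?t }
""")
for triple in transformed.graph:
    print(triple)
```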


Book ChapterDOI
06 Oct 2009
TL;DR: The latest work on the CrimeFighter toolbox for counterterrorism is presented, which is designed based on past experiences working with investigative data mining, mathematical modeling, social network analysis, graph theory, link analysis, knowledge management, and hypertext.
Abstract: Knowledge about the structure and organization of terrorist networks is important for both terrorism investigation and the development of effective strategies to prevent terrorist attacks. However, except for network visualization, terrorist network analysis remains primarily a manual process. Existing tools do not provide advanced structural analysis techniques that allow for the extraction of network knowledge from terrorist information. This paper presents the latest work on the CrimeFighter toolbox for counterterrorism. The toolbox is designed based on past experiences working with investigative data mining, mathematical modeling, social network analysis, graph theory, link analysis, knowledge management, and hypertext. CrimeFighter consists of a knowledge base and a set of tools that each support different activities in criminal investigation work: data acquisition tools supporting web harvesting, knowledge structuring tools supporting information analysis, explorer tools for searching and exploring the knowledge base, algorithms for data mining, algorithms for visualization, algorithms for social network analysis, etc.

14 citations
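The "advanced structural analysis techniques" are not detailed in the abstract; as a taste of what structural analysis of a covert network involves, the toy networkx example below ranks nodes by betweenness centrality, a standard measure for spotting brokers between cells (graph and values invented, unrelated to CrimeFighter's data model).

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("a", "b"), ("a", "c"), ("b", "c"),  # a dense cell
    ("c", "d"),                          # a single liaison link
    ("d", "e"), ("d", "f"),              # a second cell
])

# Brokers score highest: removing them fragments the network.
for node, score in sorted(nx.betweenness_centrality(g).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```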


Book ChapterDOI
06 Oct 2009
TL;DR: DYNAMO is a tool based on an Adaptive Multi-Agent System (AMAS), which aims at helping ontologists during ontology building and evolution (co-construction process) and is based on terms and lexical relations that have been extracted from text.
Abstract: Manual ontology engineering and maintenance is a difficult task that requires significant effort from the ontologist to identify and structure domain knowledge. Automatic ontology learning makes this task easier, especially through the use of text and natural language processing tools. In this paper, we present DYNAMO, a tool based on an Adaptive Multi-Agent System (AMAS), which aims at helping ontologists during ontology building and evolution (a co-construction process). DYNAMO is based on terms and lexical relations that have been extracted from text, and provides an AMAS-based module to support ontology co-construction. The ontologist interacts with the tool by modifying the ontology; the AMAS then adapts to these changes and proposes new evolutions to improve the ontology. A first experiment in ontology building shows promising results and helps us identify key issues in the agent behaviour that should be solved so that DYNAMO performs better.

13 citations


Proceedings Article
01 Jan 2009
TL;DR: The latest research on the CrimeFighter toolbox for counterterrorism provides advanced mathematical models and software tools to assist intelligence analysts in harvesting, filtering, storing, managing, analyzing, structuring, mining, interpreting, and visualizing terrorist information.
Abstract: Knowledge about the structure and organization of terrorist networks is important for both terrorism investigation and the development of effective strategies to prevent terrorist attacks. Theory from the knowledge management field plays an important role in dealing with terrorist information. Knowledge management processes, tools, and techniques can help intelligence analysts in various ways when trying to make sense of the vast amount of data being collected. This paper presents the latest research on the CrimeFighter toolbox for counterterrorism. CrimeFighter provides advanced mathematical models and software tools to assist intelligence analysts in harvesting, filtering, storing, managing, analyzing, structuring, mining, interpreting, and visualizing terrorist information.

11 citations


Book ChapterDOI
06 Oct 2009
TL;DR: This work suggests a methodology to acquire consumer health terminology for creating a Consumer-oriented Medical Vocabulary for Italian that mitigates this gap and could be used in Personal Health Records to improve users’ accessibility to their healthcare data.
Abstract: In Consumer-oriented Healthcare Informatics it is still difficult for laypersons to find, understand, and act on health information. This is due to the communication gap between the specialized medical terminology used by healthcare professionals and the "lay" medical terminology used by healthcare consumers. So there is a need to create consumer-friendly terminologies reflecting the different ways consumers and patients express and think about health topics, and a further need to map these terminologies to existing clinically oriented terminologies. This work suggests a methodology to acquire consumer health terminology for creating a Consumer-oriented Medical Vocabulary for Italian that mitigates this gap. This resource could be used in Personal Health Records to improve users' accessibility to their healthcare data. In order to evaluate this methodology we mapped "lay" terms against standard specialized terminologies to find overlaps. Results showed that our approach provided many "lay" terms that can be considered good synonyms for medical concepts.

10 citations
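The overlap evaluation reduces to mapping each acquired lay term through a synonym list into the specialized terminology and counting hits. A minimal Python sketch, with invented Italian terms and made-up concept codes standing in for the real terminologies:

```python
# Invented examples; the paper's actual terms and code systems differ.
specialized = {"cefalea": "C-0001", "ipertensione": "C-0002"}
lay_synonyms = {
    "mal di testa": "cefalea",         # headache
    "pressione alta": "ipertensione",  # high blood pressure
    "giramenti di testa": None,        # no specialized counterpart found
}

mapped = {lay: specialized[spec]
          for lay, spec in lay_synonyms.items()
          if spec in specialized}
print(mapped)                                              # 2 good lay synonyms
print(f"coverage: {len(mapped) / len(lay_synonyms):.0%}")  # 67%
```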


Proceedings Article
01 Oct 2009
TL;DR: This research applies Knowledge Management (KM) concepts and methodologies to the DoD acquisition enterprise to increase "Program Self Awareness" and provides the foundation for future development of the System Self-awareness concept and KM tools to support decision making and collaboration in diversified commercial and military applications.
Abstract: Decades of reform have been largely ineffective at improving the efficiency of the DoD Acquisition System, due in part to the complex processes and stovepipe activities that result in duplication of effort, lack of re-use and limited collaboration on related development efforts. This research applies Knowledge Management (KM) concepts and methodologies to the DoD acquisition enterprise to increase "Program Self Awareness." This research supports the implementation of reform initiatives such as Capability Portfolio Management and Open Systems Architecture, which share the common objectives of reducing duplication of effort and promoting collaboration and re-use of components. The DoD Maritime Domain Awareness (MDA) Program will be used as a test case to develop prototype data schemas and apply text and data mining tools to identify duplication and/or gaps in the features of select MDA technologies. This paper will also provide the foundation for future development of the System Self-awareness concept and KM tools to support decision making and collaboration in diversified commercial and military applications.

Book ChapterDOI
06 Oct 2009
TL;DR: This paper presents a multi-level social authentication framework, conducts performance analysis and analysis of various security attacks, and shows that the framework is much more robust than existing approaches.
Abstract: Authentication is an important mechanism for protecting security. Many web applications, such as mobile banking, email and online shopping, require users to provide their credentials to authenticate themselves before they can use the services. Four factors (password, token, biometrics and social networks) have been proposed for authentication. However, the proposed authentication schemes for these four factors all suffer from different shortcomings. In this paper, we propose a multi-level social authentication framework. Our analysis shows that the framework is much more robust than the existing approaches. More importantly, we minimize the potential privacy disclosure of users during the authentication procedure. We present our framework and conduct performance analysis as well as analysis of various security attacks.
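The abstract does not spell out the levels, so the following Python sketch is purely schematic: it shows the general shape of a multi-level scheme in which more sensitive operations demand more factors, with social vouching as the strongest; the level definitions and factor names are our assumptions, not the paper's protocol.

```python
def required_factors(sensitivity: int) -> set[str]:
    """Factors demanded at each level (invented levels, for illustration)."""
    levels = {
        1: {"password"},
        2: {"password", "token"},
        3: {"password", "token", "social_vouch"},
    }
    return levels[min(max(sensitivity, 1), 3)]

def authenticate(presented: set[str], sensitivity: int) -> bool:
    """Grant access only if every required factor was presented."""
    return required_factors(sensitivity) <= presented

print(authenticate({"password"}, 1))                           # True
print(authenticate({"password", "token"}, 3))                  # False
print(authenticate({"password", "token", "social_vouch"}, 3))  # True
```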

Proceedings Article
01 Jan 2009
TL;DR: In this paper, the authors explore perspectives on a personal knowledge management (PKM) environment in which the knowledge worker is surrounded by several layers of agents, such as personal agents, communication agents and so-called "engine room" agents like database and network agents; possible future opportunities for PKM are explored and potential benefiting parties are identified.
Abstract: It is frequently said that we now live in the information age. Knowledge has become the most important asset for individuals and organizations, and increasingly an active area of research. Accordingly, there is a need for highly qualified knowledge workers. That in turn implies the necessity of an effective technology-based education system, which provides a foundation for obtaining well-educated specialists. Perspectives on a personal knowledge management (PKM) environment are explored in this context. This environment is not just focused on an individual; rather, it also involves collaboration for knowledge exchange, thus forming communities of practice. The central concept of the paper is the knowledge worker surrounded by several layers of agents, such as personal agents, communication agents and so-called "engine room" agents like database and network agents. The next step related to the different types of agents would be to consider that all or some of them could be mobile agents. Possible future opportunities for PKM are explored in this respect and potential benefiting parties are identified.

Proceedings Article
01 Jan 2009
TL;DR: The need for knowledge management (KM) in government is explained, knowledge management requirements for public organizations in the context of electronic government are explored, and available KM solutions are described.
Abstract: Electronic Government requires new approaches to the acquisition, management and distribution of knowledge in the public organizations to transform public service delivery, enable inter-agency cooperation and support for complex decision making activities by both middle level and senior level public officers. This paper explains the need for knowledge management (KM) in government, explores knowledge management requirements for public organizations in the context of electronic government, and describes available KM solutions. In addition, the paper presents and analyzes examples of national and international KM initiatives and characterizes the maturity of current KM practices by governments. The paper concludes by indicating challenges to public sector KM practice and identifying essential elements of a robust KM framework for e-government.

Proceedings Article
01 Jan 2009
TL;DR: This work presents the most urgent challenges for designing access control solutions for semantic-based knowledge federations across multiple companies.
Abstract: Based on ongoing work in the Aletheia project on knowledge federation for the product lifecycle, we present the most urgent challenges for designing access control solutions for semantic-based knowledge federations across multiple companies.

Proceedings Article
06 Oct 2009
TL;DR: This work proposes an indexation and search tool for the ICP knowledge base and presents the main results of a descriptive investigation carried out to validate it.
Abstract: Our work aims at developing a Web platform to connect various Communities of Practice (CoPs) and to capitalise on all their knowledge. This platform addresses CoPs interested in the same general activity, for example tutoring. For that purpose, we propose a general model of Interconnection of Communities of Practice (ICP), based on the concept of Constellation of Practice (CCP) developed by Wenger (1998). The ICP model was implemented and has been used to develop the TE-Cap 2 platform, which has educational tutoring activities as its field of application. In particular, we propose an indexation and search tool for the ICP knowledge base. The TE-Cap 2 platform has been used in real conditions. We present the main results of this descriptive investigation to validate this work.

Proceedings Article
01 Jan 2009
TL;DR: A maturity model is being developed for SMEs to measure and assess the quality of their business processes; this enables companies to determine their existing status and to take the necessary actions for the competence development of their business processes, which should contribute to the attainment of their knowledge management goals.
Abstract: The hitherto isolated tools for quality, business process, and knowledge management can now be integrated to develop a suitable structure for SMEs to measure and gradually build up competence in knowledge processing. A maturity model is being developed for SMEs to measure and assess the quality of their business processes. This enables companies to determine their existing status and to take the necessary actions for the competence development of their business processes, which should contribute to the attainment of their knowledge management goals.

Proceedings Article
01 Jan 2009
TL;DR: It is argued that knowledge cannot be considered an object in the way data are in digital information systems, and an empirical model is proposed for distinguishing the notions of information and knowledge.
Abstract: Although the technological approach to Knowledge Management (KM) is widely shared, when elaborating a KM initiative's strategy we can unwittingly confuse the notions of information and knowledge, and disregard the importance of the individual's tacit knowledge used in action. Therefore, to avoid misunderstanding during the strategic orientation phase of a general KM initiative's development, it is fundamental to clearly distinguish the notion of information from the notion of knowledge. Further, we insist on the importance of integrating the individual as a component of the Enterprise's Information and Knowledge System (EIKS). In this paper, we argue that knowledge cannot be considered an object in the way data are in digital information systems. Consequently, we propose an empirical model for distinguishing the notions of information and knowledge. This model shows the role of the individual's interpretative frameworks and tacit knowledge, establishing a discontinuity between information and knowledge. This pragmatic vision requires thinking about the architecture of an Enterprise's Information and Knowledge System (EIKS), which must be a basis for discussion during the strategic orientation phase of a KM initiative.

Book ChapterDOI
06 Oct 2009
TL;DR: This paper describes DOOR, the Descriptive Ontology of Ontology Relations, used to represent, manipulate and reason upon relations between ontologies in large ontology repositories, and describes how DOOR is used in a complete framework (called KANNEL) for detecting and managing semantic relations between ontologies in large ontology repositories.
Abstract: In the context of Semantic Web search engines, it is becoming crucial to study relations between ontologies to improve the ontology selection task. In this paper, we describe DOOR, the Descriptive Ontology of Ontology Relations, which represents, manipulates and reasons upon relations between ontologies in large ontology repositories. DOOR represents a first attempt at describing and formalizing ontology relations. It does not pretend to be a universal standard structure; rather, it is intended to be a flexible, easily modifiable structure for modelling ontology relations in the context of ontology repositories. Here, we provide a detailed description of the methodology used to design the DOOR ontology, as well as an overview of its content. We also describe how DOOR is used in a complete framework (called KANNEL) for detecting and managing semantic relations between ontologies in large ontology repositories. Applied in the context of a large collection of automatically crawled ontologies, DOOR and KANNEL provide a starting point for analyzing the underlying structure of the network of ontologies that is the Semantic Web.

Proceedings Article
01 Jan 2009
TL;DR: This paper proposes a process and implementation that provides for inference-based knowledge discovery, retrieval and navigation on top of digital repositories, based on existing metadata and other semi-structured information and shows that it is possible to produce added-value and meaningful results even when existing descriptions are only flatly organized.
Abstract: Information management, description and discovery, as they are implemented today in digital repositories and digital library systems, can surely benefit from the stack of Semantic Web technologies. Most importantly, the ability to infer implied information from declared facts and assertions, based on their rich descriptions and associations, can open up new possibilities in how stored assets are accessed, searched and discovered. In this paper we propose a process and implementation that provides inference-based knowledge discovery, retrieval and navigation on top of digital repositories, based on existing metadata and other semi-structured information. We show that it is possible to produce added-value and meaningful results even when the existing descriptions are only flatly organized, and we achieve this with little manual intervention. Our work and results are based on real-world data and applied to the official University of Patras institutional repository, which is based on DSpace.
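A core ingredient of such inference is subsumption reasoning over otherwise flat metadata. The rdflib sketch below (invented item and class names, not the Patras repository's schema) hand-rolls the relevant RDFS rule so a search for a broad class also finds items typed only with a narrow one:

```python
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/repo#")
g = Graph()

# Flat, DSpace-like metadata plus a small class hierarchy on top.
g.add((EX.item1, RDF.type, EX.MasterThesis))
g.add((EX.MasterThesis, RDFS.subClassOf, EX.Thesis))
g.add((EX.Thesis, RDFS.subClassOf, EX.Publication))

# Minimal RDFS inference: propagate rdf:type up rdfs:subClassOf links.
changed = True
while changed:
    changed = False
    for s, _, cls in list(g.triples((None, RDF.type, None))):
        for sup in list(g.objects(cls, RDFS.subClassOf)):
            if (s, RDF.type, sup) not in g:
                g.add((s, RDF.type, sup))
                changed = True

# A query for the broad class now also finds the narrowly typed item.
print([str(s) for s in g.subjects(RDF.type, EX.Publication)])
```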

Book ChapterDOI
06 Oct 2009
TL;DR: A new approach to literature-based discovery is presented, which adopts semantic web techniques to measure the relevance between two relationships with specified types that involve a particular entity.
Abstract: The rate of literature publication in life sciences is growing fast, and researchers in the bioinformatics and knowledge discovery fields have been studying how to use the existing literature to discover novel knowledge or generate novel hypothesis. Existing literature-based discovery methods and tools use text-mining techniques to extract non-specified relationships between two concepts. This paper presents a new approach to literature-based discovery, which adopts semantic web techniques to measure the relevance between two relationships with specified types that involve a particular entity. We extract pairs of highly relevant relationships, which we call relationship associations, from semantic graphs representing scientific papers. These relationship associations can be used to help researchers generate scientific hypotheses or create their own semantic graphs for their papers. We present the results of experiments for extracting relationship associations from 392 semantic graphs representing MEDLINE papers.
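The notion of a relationship association (two typed relations that share an entity and recur together across papers' semantic graphs) can be illustrated in a few lines of Python; the co-occurrence counting below is a crude stand-in for the paper's relevance measure, and the triples are invented:

```python
from collections import Counter
from itertools import combinations

# Each paper's semantic graph as (subject, relation, object) triples.
graphs = [
    [("geneA", "inhibits", "proteinB"), ("proteinB", "causes", "diseaseC")],
    [("geneX", "inhibits", "proteinB"), ("proteinB", "causes", "diseaseC")],
]

pair_counts = Counter()
for triples in graphs:
    for t1, t2 in combinations(triples, 2):
        shared = {t1[0], t1[2]} & {t2[0], t2[2]}
        if shared:   # the two relations involve a common entity
            pair_counts[(t1[1], t2[1], tuple(sorted(shared)))] += 1

# Frequently co-occurring relation pairs suggest hypotheses such as
# "things that inhibit proteinB relate to what proteinB causes".
for assoc, n in pair_counts.most_common():
    print(assoc, n)
```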

Proceedings Article
01 Jan 2009
TL;DR: The practicality of the KMDL procedural model and the benefit gained from its application are reviewed, as it allows the identification of problems and of measures to overcome them.
Abstract: The Knowledge Modeling and Description Language (KMDL) is a method for analyzing knowledge activities in business processes. This contribution presents version 2.1, the latest version of the KMDL method, in a real-life scenario. In the case study presented in this contribution we aim to review the practicality of the KMDL procedural model and the benefit gained from its application, as it allows the identification of problems. The KMDL analysis delivers the identification of causes as well as measures to overcome these problems, which is highly accommodating for process improvements.

Proceedings Article
01 Jan 2009
TL;DR: The architecture and major technical details of WebC-Docs are presented; WebC-Docs is a highly customizable toolkit (for the WebComfort CMS platform) that provides document management functionality and can be configured and used in various kinds of scenarios.
Abstract: Content Management Systems (CMS) are typically regarded as critical software platforms for the success of organizational web sites and intranets. Nevertheless, a simple CMS alone does not provide enough support for a typical organization's requirements, such as document management and storage. On the other hand, Enterprise Content Management (ECM) systems are typically regular web applications that provide such functionality, but without the added advantage of being based on a component-based CMS platform. This paper presents the architecture and major technical details of WebC-Docs, a highly customizable toolkit (for the WebComfort CMS platform) that provides document management functionality. Due to this, the WebC-Docs toolkit can be configured and used in various kinds of scenarios.

Book ChapterDOI
06 Oct 2009
TL;DR: A preliminary analysis of instructors' utilization of a corporate portal in an academic institution shows that providing tools through corporate portals to support knowledge conversion enhances the effectiveness and efficiency of business processes and employees' learning, whereas providing tools to support knowledge protection improves the effectiveness of organizational processes.
Abstract: This pilot study examines the role of the corporate portal in leveraging organizational knowledge management (acquisition, conversion, application and protection). It also explores the business process benefits (such as efficiency, effectiveness and innovation) and employee benefits (such as learning, adaptability and satisfaction) that result from supporting organizational KM through a corporate portal. The preliminary analysis of instructors' utilization of the corporate portal in an academic institution shows that providing tools through corporate portals to support knowledge conversion enhances the effectiveness and efficiency of business processes and employees' learning, whereas providing tools to support knowledge application enhances the effectiveness of organizational processes as well as employees' learning, adaptability and satisfaction. Thus, the analysis indicates that knowledge conversion impacts business processes more than employees, whereas knowledge application impacts employees more than business processes. Offering tools to support knowledge protection also improves the effectiveness of organizational processes. However, the preliminary analysis shows that the knowledge acquisition process has no impact on business processes or employees.

Book ChapterDOI
06 Oct 2009
TL;DR: This work proposes a model of the interconnection of communities of practice (ICP), based on the concept of constellation of communitiesof practice (CCP) developed by Wenger, and applied the model and platform to the case of university tutors.
Abstract: Communities of practice (CoPs) emerge within companies by the way of informal discussions with practitioners who share ideas and help each other to solve problems. Each CoP develops its own practices, reinventing what is certainly being replicated somewhere else, in other companies. Our work aims at connecting CoPs centred on the same general activity and capitalising on all the produced knowledge. For that purpose, we propose a model of the interconnection of communities of practice (ICP), based on the concept of constellation of communities of practice (CCP) developed by Wenger. The model of ICP was implemented and has been used to develop the TE-Cap 2 platform. This platform relies on a specific knowledge management tool and a social networking service. We applied the model and platform to the case of university tutors. The TE-Cap 2 platform has been used in real conditions with tutors from different institutions and countries and we present the main results of this descriptive investigation.

Book ChapterDOI
06 Oct 2009
TL;DR: This work examines commercial and open source Enterprise 2.0 tools in detail and derives a unifying multi-dimensional classification and evaluation framework for their concepts and services.
Abstract: In recent years a new class of integrated web-based enterprise tools has emerged, facilitating team collaboration and knowledge management. In this paper we provide a detailed analysis of their concepts and services. We examined the following commercial and open source Enterprise 2.0 tools in detail: Alfresco Share, Atlassian Confluence, GroupSwim, Jive SBS, Liferay Social Office, Microsoft Office SharePoint Server, Socialtext, Tricia. From this, we derived a unifying multi-dimensional classification and evaluation framework. For each dimension we identified several technical criteria to characterize the functional capabilities of a given tool. Based on this schema we conduct a detailed evaluation of each particular tool. This work contributes to a better technical understanding of this emerging family of enterprise applications, highlights strengths and weaknesses of existing tools and identifies areas for further system research and development.

Book ChapterDOI
06 Oct 2009
TL;DR: A novel approach that combines syntactic and lexical-semantic information to identify hypernymic relationships is proposed; it outperformed competitive lexico-syntactic patterns by 7%, leading to an F1-measure of over .91.
Abstract: Hypernym discovery is an essential task for building and extending ontologies automatically. In comparison to the whole Web as a source for information extraction, online encyclopedias provide far more structure and reliability. In this paper we propose a novel approach that combines syntactic and lexical-semantic information to identify hypernymic relationships. We compiled semi-automatically and manually created training data and a gold standard for evaluation from the first sentences of the German version of Wikipedia. We trained a sequential supervised learner with a semantically enhanced tagset. The experiments showed that the cleanliness of the data is far more important than its amount. Furthermore, it was shown that bootstrapping is a viable approach for improving the results. Our approach outperformed the competitive lexico-syntactic patterns by 7%, leading to an F1-measure of over .91.
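For contrast, the lexico-syntactic baseline the paper outperforms can be approximated in a few lines: German first sentences in Wikipedia typically follow a copula pattern such as "X ist ein(e) Y", from which the hypernym Y can be read off directly (the regex and sentences below are our illustration, not the paper's system):

```python
import re

# Copula pattern "X ist ein/eine Y" -> (hyponym, hypernym) candidate.
PATTERN = re.compile(r"^(?P<hypo>.+?) ist (?:ein|eine) (?P<hyper>\w+)")

sentences = [
    "Ein Dackel ist eine Hunderasse.",
    "Python ist eine Programmiersprache.",
]
for s in sentences:
    m = PATTERN.match(s)
    if m:
        print(m.group("hypo"), "->", m.group("hyper"))
```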

Book ChapterDOI
06 Oct 2009
TL;DR: The overall conclusion is that the decisive output for English is usable data, while the procedure currently exploited by Rudify does not easily carry over to Spanish and Dutch.
Abstract: Rudify is a set of tools for automatically annotating concepts in an ontology with the ontological meta-properties employed by OntoClean [1]. While OntoClean provides a methodology for evaluating ontological hierarchies based on the ontological meta-properties of the concepts in the hierarchy, it does not provide a method for determining the meta-properties of a given concept within an ontology. Rudify has been developed to help bridge this gap, and has been used in the KYOTO project to facilitate ontology development. The general idea behind Rudify is the assumption that a preferred set of linguistic expressions is used when talking about ontological meta-properties. Thus, one can deduce a concept's meta-properties from the usage of the concept's lexical representation (LR) in natural language. This paper describes the theory behind Rudify and its development, and evaluates Rudify's output for the rigidity of base concepts in English, Dutch, and Spanish. Our overall conclusion is that the decisive output for English is usable data, while the procedure currently exploited by Rudify does not easily carry over to Spanish and Dutch.
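Rudify's premise (meta-properties leave traces in phrasing) suggests a simple scoring scheme: anti-rigidity shows up in constructions like "is no longer a <concept>". The sketch below is only our reading of that idea, with a stubbed hit-count backend and invented counts and threshold:

```python
def rigidity_label(concept: str, hits) -> str:
    """Classify a concept as rigid or anti-rigid from phrase frequencies."""
    anti = hits(f"is no longer a {concept}") or 0
    base = hits(f"is a {concept}") or 0
    ratio = anti / base if base else 0.0
    return "anti-rigid" if ratio > 0.01 else "rigid"

# Stub standing in for a web-search or corpus hit-count backend.
fake_counts = {"is no longer a student": 900, "is a student": 10_000,
               "is no longer a human": 2, "is a human": 50_000}.get

print(rigidity_label("student", fake_counts))  # anti-rigid: one can stop
print(rigidity_label("human", fake_counts))    # rigid: one cannot
```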

Proceedings Article
06 Oct 2009
TL;DR: In this paper, the authors present a course characterized by a pedagogical organization based upon knowledge management (KM) concepts: knowledge transfer and construction throughout a learning circle and social interactions.
Abstract: Project management education programmes are often proposed in higher education to give students competences in project planning (Gantt charts), project organizing, human and technical resource management, and quality control, as well as social competences (collaboration, communication), emotional ones (empathy, consideration of the other, humour, ethics), and organizational ones (leadership, political vision, and so on). This training is often delivered as training-by-project learning with case studies. This article presents a course characterized by a pedagogical organization based upon Knowledge Management (KM) concepts: knowledge transfer and construction throughout a learning circle and social interactions. The course is supported by a rich and complex tutor organization. We have observed this course using another KM method, inspired by KADS, with various returns of experience formalized into cards and charts. Our intention is, following the model of Argyris and Schon (Smith, 2001), to gain feedback about local and global processes and about actors' experience in order to improve the course. This paper describes the course in detail (pedagogical method and tutor activity) and the KM observation method that permits the identification of problems to solve. In our case, we observed problems of pedagogical coordination and skills acquisition. We propose to design a metacognitive tool for tutors and students, usable for improving knowledge construction and the organisation of the learning process.

Book ChapterDOI
06 Oct 2009
TL;DR: A new measure to select the best consensus data partition, among a variety of consensus partitions, based on the concept of average cluster consistency between each data partition that belongs to the cluster ensemble and a given consensus partition is proposed.
Abstract: Various approaches to produce cluster ensembles and several consensus functions to combine data partitions have been proposed in order to obtain a more robust partition of the data. However, the existence of many approaches leads to another problem: knowing which approach to producing the cluster ensemble's partitions and combining them best fits a given data set. In this paper, we propose a new measure to select the best consensus data partition among a variety of consensus partitions, based on the concept of average cluster consistency between each data partition belonging to the cluster ensemble and a given consensus partition. Experimental results comparing this measure with other measures for cluster ensemble selection on 9 data sets showed that the partitions selected by our measure were generally of superior quality compared with the consensus partitions selected by the other measures.
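The selection step itself is simple once a partition-agreement measure is fixed: score every candidate consensus partition by its mean agreement with the ensemble and keep the best. In the Python sketch below, the adjusted Rand index stands in for the paper's average cluster consistency measure, and the label vectors are toy data:

```python
from sklearn.metrics import adjusted_rand_score

def mean_agreement(ensemble, consensus):
    """Mean agreement between a candidate consensus partition and
    every partition in the ensemble (ARI as a stand-in measure)."""
    return sum(adjusted_rand_score(p, consensus) for p in ensemble) / len(ensemble)

ensemble = [[0, 0, 1, 1, 2, 2],
            [0, 0, 1, 1, 1, 2],
            [0, 1, 1, 1, 2, 2]]
candidates = {"k-means consensus": [0, 0, 1, 1, 2, 2],
              "single-link consensus": [0, 0, 0, 1, 2, 2]}

best = max(candidates, key=lambda k: mean_agreement(ensemble, candidates[k]))
print(best)  # the candidate that agrees most with the ensemble
```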