
Showing papers presented at "International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management in 2011"


Proceedings Article
01 Jan 2011
TL;DR: Introduces how INTCare, an IDSS developed in the intensive care unit of the Centro Hospitalar do Porto, will accommodate new pervasive, real-time functionalities, and proposes solutions for the most important constraints.
Abstract: Pervasiveness, real-time and online processing are important requirements on the researchers’ agenda for the development of the future generation of Intelligent Decision Support Systems (IDSS). In particular, knowledge-discovery-based IDSS operating in critical environments such as intensive care should be adapted to these new requests. This paper introduces how INTCare, an IDSS developed in the intensive care unit of the Centro Hospitalar do Porto, will accommodate the new functionalities. Solutions are proposed for the most important constraints, e.g. paper-based data, missing values, out-of-range values, data integration and data quality. The benefits and limitations of the approach are discussed.
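The data-quality constraints listed in the abstract (missing values, out-of-range values) can be pictured with a minimal screening sketch. The signal names and valid ranges below are hypothetical examples, not INTCare's actual rules:

```python
# Hypothetical valid ranges for two vital signs; the real system's rules
# are not given in the abstract.
VALID_RANGES = {"heart_rate": (20, 250), "spo2": (50, 100)}

def screen(reading):
    """Classify a (signal, value) reading as ok, missing, or out_of_range."""
    name, value = reading
    if value is None:
        return "missing"
    lo, hi = VALID_RANGES[name]
    return "ok" if lo <= value <= hi else "out_of_range"

# A toy stream mixing a valid reading, a missing value, and an outlier.
stream = [("heart_rate", 72), ("spo2", None), ("heart_rate", 400)]
labels = [screen(r) for r in stream]
```

Flagged readings could then be dropped, imputed, or routed for manual review, which is the kind of decision the paper discusses for paper-based and sensor data.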

28 citations


Proceedings Article
01 Jan 2011
TL;DR: This paper provides a reference model for knowledge retention within SMEs that is especially tailored for SMEs to kick-start a KR initiative in their organization and can also serve as a template to assess an SME’s KR maturity level.
Abstract: Knowledge retention (KR) has been identified as one of the critical factors for maintaining sustainable performance. However, until recently, most existing research has focused on large organizations, while very few studies have addressed this issue in Small and Medium-Sized Enterprises (SMEs). To redress some of this imbalance in the literature, this paper provides a reference model for knowledge retention within SMEs. This model includes most of the fundamental elements that are believed to be critical for an effective KR implementation. The model is especially tailored for SMEs to kick-start a KR initiative in their organization and can also serve as a template to assess an SME’s KR maturity level.

23 citations


Book ChapterDOI
26 Oct 2011
TL;DR: The exchange of “Research Objects” rather than articles proposes a technical solution; however, the obstacles are mainly social ones that require the scientific community to rethink its current value systems for scholarship, data, methods and software.
Abstract: A “knowledge turn” is a cycle of a process by a professional, including the learning generated by the experience, deriving more good and leading to advance. The majority of scientific advances in the public domain result from collective efforts that depend on rapid exchange and effective reuse of results. We have powerful computational instruments, such as scientific workflows, coupled with widespread online information dissemination to accelerate knowledge cycles. However, turns between researchers continue to lag. In particular, method obfuscation obstructs reproducibility. The exchange of “Research Objects” rather than articles proposes a technical solution; however, the obstacles are mainly social ones that require the scientific community to rethink its current value systems for scholarship, data, methods and software.

22 citations


Proceedings Article
01 Jan 2011
TL;DR: An ontology that can represent the semantics of multimedia content, especially its metadata, is defined, giving that metadata an unambiguous meaning; the innovation lies in using the Adobe XMP, Dublin Core, EXIF and IPTC standards as a starting point.
Abstract: In recent years, we witnessed the diffusion and rise in popularity of software platforms for User Generated Content management, especially multimedia objects. These platforms handle a large quantity of unclassified information. UGC sites (e.g. YouTube and Flickr) do not force users to perform classification operations and metadata definitions, leaving space for a logic of free tags (folksonomies). In the context of an industrial project financed by the Autonomous Region of Sardinia, the idea of producing a Geolocalized Guide based on a Knowledge-base came forth. Such a guide would be able to share georeferenced content with its users, originated from UGC sources as well as from the users themselves. For this purpose, we defined an ontology that can represent the semantics of multimedia content, especially its metadata, which in turn can be given an unambiguous meaning. The innovation in this work is represented by the use of the Adobe XMP, Dublin Core, EXIF, IPTC standards as a starting point. In order to unify metadata coming from different sources we defined mapping rules toward a structure defined by sources like YouTube and
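The metadata-unification step the abstract describes can be sketched as a simple field mapping. The field names and the `unify` helper below are illustrative inventions, not the project's actual mapping rules or ontology:

```python
# Hypothetical mapping from standard-specific field names to one common
# schema; real EXIF/Dublin Core vocabularies are much richer.
FIELD_MAP = {
    "exif": {"DateTimeOriginal": "created", "GPSLatitude": "lat"},
    "dublin_core": {"dc:date": "created", "dc:title": "title"},
}

def unify(source, record):
    """Rename source-specific keys to the common schema, dropping unknowns."""
    mapping = FIELD_MAP[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

# Metadata for the same item arriving from two different standards.
merged = {**unify("exif", {"DateTimeOriginal": "2011-01-01"}),
          **unify("dublin_core", {"dc:title": "Nora beach"})}
```

In the paper this role is played by an ontology rather than a flat dictionary, so that the unified fields also carry formal semantics.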

11 citations


Proceedings Article
28 Oct 2011
TL;DR: Based on the literature on knowledge transfer, good practices for the transition phase of an on-going outsourced project are exhibited, and it is shown how these good practices were applied in a real application case.
Abstract: Outsourcing information system development has become a common practice in companies. Many contributions have been proposed for dealing with the management of such projects and the relationship between client and vendor. But little is known about how to manage a change of service provider in an on-going project. Our study concerns the transition from an outgoing service provider to an incoming one during an outsourcing development project in a public institution. This transition mainly consists in transferring the project. The transfer involves not only materials (documents and code) but also knowledge. Based on the literature on knowledge transfer, we exhibit good practices for the transition phase of an on-going outsourced project. We show how we applied these good practices in a real application case.

11 citations


Book ChapterDOI
26 Oct 2011
TL;DR: This paper investigates common practices on information sharing and domain ontological modelling to enable service composition of cloud computing service provisioning and exploits the potential of semantic models in supporting service and application linkage by studying links between the complementary services.
Abstract: Cloud computing is not only referred to as a synonym of on-demand usage of computing resources and services, but as the most promising paradigm to provide infinite scalability by using virtual infrastructures. On the other hand, mobile technologies are scaling up to encompass every day a growing number of real and virtual objects in order to provide large-scale data applications, e.g. sensor-based intelligent communications networks, smart grid computing applications, etc. In those complex scenarios, cloud-based computing systems need to cope with diverse service demands in order to enable dynamic composition based on particular users’ demands, variations in collected-data bandwidth and fluctuation of data quality, and to satisfy ad-hoc usage for personalized applications. Thus, essential characteristics of cloud-native systems, i.e. elasticity and multi-tenancy, are fundamental requirements for large-scale data processing systems. In this paper we have investigated common practices in information sharing and domain ontological modelling to enable service composition for cloud computing service provisioning. This approach exploits the potential of semantic models in supporting service and application linkage by studying the links between complementary services. By using semantic modelling and knowledge engineering we can enable the composition of services. We discuss what implications this approach imposes in architectural design terms and also how virtual infrastructures and cloud-based systems can benefit from this ontological modelling approach. Research results about information sharing and information modelling using semantic annotations are discussed. An introductory application scenario is depicted.

11 citations


Book ChapterDOI
26 Oct 2011
TL;DR: This paper proposes a service identification process based on the Design & Engineering Methodology for Organizations (DEMO) and concludes that both client and service provider should be included in this process.
Abstract: Despite the remarkable growth of the services industry in the world economy, service quality is still affected by gaps identified two decades ago. One of these gaps occurs when the service provider has a perception of what the customer expects that diverges from the real expected service. This difference can be caused by a poor service identification process and, more precisely, by who should be included in this process. Current solutions to identify services still have drawbacks: they are not customer driven, are web-services driven, or lack specific processes. In this paper, we propose a service identification process based on the Design & Engineering Methodology for Organizations (DEMO). The proposal was evaluated by comparing two lists of services provided by a Human Resources department: one based on a description given by the head of the department and another based on the customers that use the department’s services. The differences between the two lists show the gap between the customers’ expectations and the provider’s perceptions of those expectations. We conclude that both client and service provider should be included in the service identification process.

9 citations


Book ChapterDOI
26 Oct 2011
TL;DR: Two of the meanings of the word “cultivation” that are rather unrelated show a strong dependency, when applied to the domain of code quality:
Abstract: Two of the meanings of the word “cultivation” that are rather unrelated show a strong dependency, when applied to the domain of code quality:

9 citations


Book ChapterDOI
26 Oct 2011
TL;DR: It is demonstrated that the accuracy of a text retrieval system can be improved if it employs a query expansion method based on explicit relevance feedback that expands the initial query with a structured representation instead of a simple list of words.
Abstract: In this paper we have demonstrated that the accuracy of a text retrieval system can be improved if we employ a query expansion method based on explicit relevance feedback that expands the initial query with a structured representation instead of a simple list of words. This representation, named a mixed Graph of Terms, is composed of a directed and an undirected subgraph and can be automatically extracted from a set of documents using a method for term extraction based on the probabilistic Topic Model. The evaluation of the method has been conducted on a web repository collected by crawling a huge number of web pages from the website ThomasNet.com. We have considered several topics and performed a comparison with a baseline and a less complex structure, that is, a simple list of words.
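The structured-versus-flat contrast the abstract draws can be pictured with a small sketch of expansion from explicit relevance feedback that keeps co-occurring term pairs instead of single terms. This is an illustration of the idea only, not the authors' mixed Graph of Terms, which is extracted with a probabilistic Topic Model:

```python
from collections import Counter
from itertools import combinations

def expand_flat(relevant_docs, k=3):
    """Baseline: expand with the k most frequent single terms."""
    words = Counter(w for d in relevant_docs for w in d.split())
    return [w for w, _ in words.most_common(k)]

def expand_pairs(relevant_docs, k=2):
    """Structured variant: expand with the k most frequent term pairs,
    which keep some of the context a flat word list loses."""
    pairs = Counter()
    for d in relevant_docs:
        pairs.update(combinations(sorted(set(d.split())), 2))
    return [p for p, _ in pairs.most_common(k)]

# Toy relevance feedback: three documents the user marked as relevant.
docs = ["steel pipe fitting", "steel pipe flange", "copper pipe"]
flat = expand_flat(docs)         # frequent single terms
structured = expand_pairs(docs)  # frequent co-occurring pairs
```

A pair such as ("pipe", "steel") constrains matching documents more tightly than the two words taken independently, which is the intuition behind expanding with a structure rather than a list.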

9 citations


Proceedings Article
09 Aug 2011
TL;DR: Argues that the accuracy of the mechanisms used to extract tokens from the non-natural-language sections of WSDL files directly affects the performance of semantics-based discovery techniques, because some of them can be more sensitive to noise.
Abstract: Most web service discovery systems use keyword-based search algorithms and, although partially successful, sometimes fail to satisfy some users’ information needs. This has given rise to several semantics-based approaches that look to go beyond simple attribute matching and try to capture the semantics of services. However, the results reported in the literature vary and in many cases are worse than the results obtained by keyword-based systems. We believe the accuracy of the mechanisms used to extract tokens from the non-natural-language sections of WSDL files directly affects the performance of these techniques, because some of them can be more sensitive to noise. In this paper three existing tokenization algorithms are evaluated and a new algorithm that outperforms all the algorithms found in the literature is introduced.
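The kind of tokenization problem the abstract describes, splitting identifier-style WSDL names into word tokens, might be sketched as follows. This is a common camel-case splitting heuristic, not the paper's new algorithm:

```python
import re

def tokenize(identifier):
    """Split an identifier such as 'getStockQuoteV2_fast' into lowercase
    word tokens: camel-case runs, acronyms, and digit groups."""
    parts = re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+",
                       identifier)
    return [p.lower() for p in parts]

tokens = tokenize("getStockQuoteV2_fast")
```

Noise sensitivity shows up exactly here: an operation name that fails to split (or over-splits) yields tokens that never match the user's query terms, degrading the semantic matcher downstream.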

7 citations


Book ChapterDOI
26 Oct 2011
TL;DR: A comparative and explorative study on the performance of various existing proximity measures when applied to the spectral clustering algorithm indicates that the commonly used Euclidean distance measure is not always suitable, specifically in domains where the data is highly imbalanced and the correct clustering of boundary objects is critical.
Abstract: Spectral clustering algorithms recently gained much interest in the research community. This surge in interest is mainly due to their ease of use, their applicability to a variety of data types and domains, as well as the fact that they very often outperform traditional clustering algorithms. These algorithms consider the pair-wise similarity between data objects and construct a similarity matrix to group data into natural subsets, so that the objects located in the same cluster share many common characteristics. Objects are then allocated into clusters by employing a proximity measure, which is used to compute the similarity or distance between the data objects in the matrix. As such, an early and fundamental step in spectral cluster analysis is the selection of a proximity measure. This choice also has the highest impact on the quality and usability of the end result. However, this crucial aspect is frequently overlooked. For instance, most prior studies use the Euclidean distance measure without explicitly stating the consequences of selecting such a measure. To address this issue, we perform a comparative and explorative study on the performance of various existing proximity measures when applied to the spectral clustering algorithm. Our results indicate that the commonly used Euclidean distance measure is not always suitable, specifically in domains where the data is highly imbalanced and the correct clustering of boundary objects is critical. Moreover, we also noticed that for numeric data types, the relative distance measures outperformed the absolute distance measures and may therefore boost the performance of a clustering algorithm. As for datasets with mixed variables, the selection of the distance measure for numeric variables again has the highest impact on the end result.
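A toy example of why the choice of proximity measure matters before the similarity matrix is ever built: the two measures below disagree about which of two points is closer to a reference point, because cosine distance, a relative measure, ignores magnitude while Euclidean distance, an absolute measure, does not:

```python
import math

def euclidean(u, v):
    """Absolute measure: straight-line distance."""
    return math.dist(u, v)

def cosine_distance(u, v):
    """Relative measure: 1 - cos(angle between u and v)."""
    dot = sum(x * y for x, y in zip(u, v))
    return 1 - dot / (math.hypot(*u) * math.hypot(*v))

a = (1.0, 1.0)
b = (2.0, 1.0)    # nearby, but pointing in a different direction
c = (10.0, 10.0)  # far away, but pointing the same way as a

closer_euclid = min((b, c), key=lambda p: euclidean(a, p))
closer_cosine = min((b, c), key=lambda p: cosine_distance(a, p))
```

Under Euclidean distance `b` wins; under cosine distance `c` wins. Feeding either choice into a Gaussian similarity kernel would produce different similarity matrices and hence different spectral clusterings, which is the effect the study measures.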

Proceedings Article
01 Jan 2011
TL;DR: The novices in this study have high trust in the experienced employees’ knowledge and skills, though not necessarily in them as persons, which, according to the results, can prevent knowledge sharing even when the circumstances for sharing are favourable.
Abstract: This paper explores the processes connected to information and knowledge sharing in the context of expert work in an organization undergoing technical succession. The qualitative empirical research was conducted among six senior-junior pairs participating in the technical succession in the studied company. According to the results, the factors affecting knowledge sharing between generations are interaction, expectations, dispositions and circumstances, which include time for sharing and proximity. Knowledge sharing, which may include both transfer and building, happens in eight phases. Informal interaction is of high importance, underlining an open information culture between generations, and constitutes a prerequisite for sharing experts’ work-related knowledge. Further, the novices in this study have high trust in the experienced employees’ knowledge and skills, though not necessarily in them as persons, which, according to the results, can prevent knowledge sharing even when the circumstances for sharing are favourable. An important aspect in the study is how influential the novices’ conception of the work task is on knowledge sharing. Whether they define their work as development or maintenance determines the nature of the knowledge shared and how it is shared.

Proceedings Article
01 Jan 2011
TL;DR: An ontology-based framework for capitalizing knowledge for reuse in CoPEs is proposed, and it is shown through an example of use how semantics can contribute to the management of the tacit knowledge that community members own, and therefore to the improvement of the learning process in CoPEs.
Abstract: Knowledge management in Communities of Practice of E-learning (CoPEs) is challenged by several issues: the complexity of knowledge, considered as interdisciplinary (psycho-cognitive, pedagogic, software-oriented, and hardware-oriented), the difficulty of accessing and reusing that knowledge, and the complexity of the knowledge capitalization process. Most of the knowledge exchanged is mainly tacit, based on direct communication between members, and therefore needs to be elicited and represented in a formal way to be capitalized. Explicit knowledge is generally shared and accessible through the CoPE’s repositories. However, it is not always well elicited and organized. In this paper, we propose an ontology-based framework for capitalizing knowledge for reuse in CoPEs. We show through an example of use how semantics can contribute to the management of the tacit knowledge that the community members own, and therefore to the improvement of the learning process in CoPEs.

Proceedings Article
26 Oct 2011
TL;DR: The first field application of a Collaboration Maturity Model (Col-MM) is reported on through an automotive industry field study, intended to be sufficiently generic to be applied to any type of collaboration and useable to assess the collaboration maturity of a given team holistically through self-assessments performed by practitioners.
Abstract: Trends like globalization and increased product and service complexity have pushed organizations to use more distributed, cross-disciplinary, cross-cultural, virtual teams. In this context, the quality of collaboration directly affects the quality of an organization’s outcomes and performance. This paper reports on the first field application of a Collaboration Maturity Model (Col-MM) through an automotive industry field study. This model was empirically developed during a series of Focus Group meetings with professional collaboration experts to maximize its relevance and practical applicability. Col-MM is intended to be sufficiently generic to be applied to any type of collaboration and useable to assess the collaboration maturity of a given team holistically through self-assessments performed by practitioners. The purpose of the study reported in this paper was to apply and evaluate the use of the Col-MM in practice. The results should be of interest to academic researchers and information systems practitioners interested in collaboration maturity assessment. The research contributes to the collaboration performance and (IT) project management literature, theory and practice through a detailed case study that develops artefacts that provide evidence of proof of value and proof of use in the field.

Proceedings Article
01 Jan 2011
TL;DR: The Common Body of Knowledge (CBK) as mentioned in this paper collects, integrates, and structures knowledge from the different disciplines based on an ontology that allows one to semantically enrich content to be able to query the CBK.
Abstract: The discipline of engineering secure software and services brings together researchers and practitioners from software, services, and security engineering. This interdisciplinary community is fairly new, it is still not well integrated and is therefore confronted with differing perspectives, processes, methods, tools, vocabularies, and standards. We present a Common Body of Knowledge (CBK) to overcome the aforementioned problems. We capture use cases from research and practice to derive requirements for the CBK. Our CBK collects, integrates, and structures knowledge from the different disciplines based on an ontology that allows one to semantically enrich content to be able to query the CBK. The CBK heavily relies on user participation, making use of the Semantic MediaWiki as a platform to support collaborative writing. The ontology is complemented by a conceptual framework, consisting of concepts to structure the knowledge and to provide access to it, and a means to build a common terminology. We also present organizational factors covering dissemination and quality assurance.

Proceedings Article
01 Jan 2011
TL;DR: This paper presents a concept for a value-oriented framework for Knowledge Management Systems in small and medium enterprises, allowing them to choose suitable Knowledge Management Systems according to their business objectives and the business value to be expected from the applied solution.
Abstract: Knowledge management and Knowledge Management Systems have been around for many years. They can be considered well established in larger enterprises, yet the effect they have on small and medium enterprises is still not fully clarified. Through our own research we recognized that, especially with regard to Knowledge Management Systems, a solid foundation for decision making in SMEs is missing. This stems mostly from the unclear value such a system can add to the enterprise. To address this, this paper presents our concept for a value-oriented framework for Knowledge Management Systems in small and medium enterprises, allowing them to choose suitable Knowledge Management Systems according to their business objectives and the business value to be expected from the applied solution. The framework accordingly consists of three dimensions: knowledge services, business value of IT, and their interrelations.

Book ChapterDOI
26 Oct 2011
TL;DR: This paper adds key insights gleaned from additional in-depth review of relevant literature and data analysis to investigate the proposition that the interaction of four “canonical forces” affects both internal GEE cooperation and SoS-level operational effectiveness.
Abstract: We coined the term “government extended enterprise” (GEE) to describe sets of effectively autonomous government organizations that must cooperate voluntarily to achieve desired GEE-level outcomes. A GEE is, by definition, a complex dynamical system of systems (SoS). Our continuing research investigates the proposition that the interaction of four “canonical forces” affects both internal GEE cooperation and SoS-level operational effectiveness, changing the GEE’s status as indicated by the “SoS differentiating characteristics” detailed by Boardman and Sauser. Three prior papers have described the concepts involved, postulated the relationships among them, discussed the n-player, iterated “Stag Hunt” methodology applied to execute a real proof-of-concept case (the US Counterterrorism Enterprise’s response to the Christmas Day Bomber) in an agent-based model, and presented preliminary conclusions from testing of the simulation. This paper adds key insights gleaned from additional in-depth review of relevant literature and data analysis.

Proceedings Article
01 Jan 2011
TL;DR: The paper proposes an approach based on application of ontology management technology to the tasks of product configuration and product code design in an industrial company that has more than 300 000 customers in 176 countries.
Abstract: The paper proposes an approach based on the application of ontology management technology to the tasks of product configuration and product code design in an industrial company. The development of the approach includes the use of Web services for industrial environment representation and context-based information processing to facilitate the product configuration task. The research is illustrated via a case study for an industrial company that has more than 300 000 customers in 176 countries. A use case for ontology-driven product configuration demonstrates the applicability of the approach.

Book ChapterDOI
26 Oct 2011
TL;DR: This work proposes a method to identify and prioritize external variables that impact the execution of specific activities of a process, applying competitive intelligence concepts and data mining techniques, and evaluates the method in a case study which showed how the discovered variables influenced specific activities of the process.
Abstract: Successful organizations are those able to identify and respond appropriately to changes in their internal and external environments. The search for flexibility is linked to the need for the organization to adapt to frequent and exceptional changes in the scenarios imposed on them. Those disruptions in routine should be reflected in business processes, in the sense that processes must be adjusted to such variations, taking into account both internal and external variables, typically referred to in the literature as the context of the process. In particular, defining the relevance of external context for the execution of a process is still an open research issue. We propose a method to identify and prioritize external variables that impact the execution of specific activities of a process, applying competitive intelligence concepts and data mining techniques. We have evaluated the method in a case study, which showed how the discovered variables influenced specific activities of the process.

Proceedings Article
26 Oct 2011
TL;DR: This paper addressed the problem of merging geographical information systems and building information model by presenting a new architecture based on a semantic multi-representation of heterogeneous information.
Abstract: Interoperability of information systems is partially resolved thanks to many standards such as network protocols, XML-derived languages and object-oriented programming. Nevertheless, semantic heterogeneity limits collaborative work and interoperability. Despite ontologies and other semantic techniques, the binding of heterogeneous information systems requires new techniques for managing and displaying information according to the semantic representation of each stakeholder in the collaboration. In this paper we address the problem of merging geographical information systems and building information models. The way to achieve this goal must solve several heterogeneity problems due to the data life cycle, the data temporality, the binding between 2D geo-referenced modelling and 3D geometric models, or problems of scalability for real-time 3D display from a remote server when managing a real environment of several million m². To bridge this gap, we present a new architecture based on a semantic multi-representation of heterogeneous information.

Book ChapterDOI
26 Oct 2011
TL;DR: A reverse engineering process to extract static and behavioral features from malware based on an assumption that behavior of a malware can be revealed by executing it and observing its effects on the operating environment and preliminary results indicate that BLEM2 rules may provide interesting insights for essential feature identification.
Abstract: Malware detection is a major challenge in today’s software security profession. Works exist for malware detection based on static analysis such as function length frequency, printable string information, byte sequences, API calls, etc. Some works have also applied dynamic analysis using features such as function call arguments, returned values, dynamic API call sequences, etc. In this work, we applied a reverse engineering process to extract static and behavioral features from malware, based on the assumption that the behavior of a malware can be revealed by executing it and observing its effects on the operating environment. We captured all the activities, including registry activity, file system activity, network activity, API calls made, and DLLs accessed, for each executable by running them in an isolated environment. Using the extracted features from the reverse engineering process and static analysis features, we prepared two datasets and applied data mining algorithms to generate classification rules. Essential features are identified by applying Weka’s J48 decision tree classifier to 1103 software samples, 582 malware and 521 benign, collected from the Internet. The performance of all classifiers is evaluated by 5-fold cross-validation with 80-20 splits of training sets. Experimental results show that the Naive Bayes classifier has better performance on the smaller data set with 15 reversed features, while J48 has better performance on the data set created from the API Call data set with 141 features. In addition, we applied a rough-set-based tool, BLEM2, to generate and evaluate the identification of reverse engineered features in contrast to decision trees. Preliminary results indicate that BLEM2 rules may provide interesting insights for essential feature identification.
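As a rough illustration of the classification step, here is a tiny Bernoulli Naive Bayes over boolean "API call present" features. The samples and API names are invented for illustration, and this hand-rolled classifier merely stands in for the Weka classifiers the paper actually uses:

```python
import math
from collections import defaultdict

def train(samples):
    """samples: list of (api_call_set, label); count feature presence per label."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for feats, label in samples:
        totals[label] += 1
        for f in feats:
            counts[label][f] += 1
    return counts, totals

def predict(counts, totals, vocab, feats):
    """Pick the label maximizing the Bernoulli Naive Bayes log-posterior."""
    n = sum(totals.values())
    best, best_lp = None, -math.inf
    for label, t in totals.items():
        lp = math.log(t / n)  # log prior
        for f in vocab:
            p = (counts[label][f] + 1) / (t + 2)  # Laplace-smoothed P(f|label)
            lp += math.log(p if f in feats else 1 - p)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented training data: sets of API calls observed per executable.
data = [({"CreateRemoteThread", "WriteProcessMemory"}, "malware"),
        ({"CreateRemoteThread"}, "malware"),
        ({"ReadFile"}, "benign"),
        ({"ReadFile", "CloseHandle"}, "benign")]
vocab = {f for feats, _ in data for f in feats}
counts, totals = train(data)
guess = predict(counts, totals, vocab, {"CreateRemoteThread"})
```

The paper's 5-fold cross-validation would repeat this train/predict cycle on rotating 80-20 splits and average the accuracy.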

Proceedings Article
26 Oct 2011
TL;DR: Presents VersionGraph, a new way to manage the lifecycle of an ontology that incorporates both versioning tools and an evolution process; it is integrated into the source ontology from its creation so that the ontology can evolve and be versioned.
Abstract: Ontologies are built on systems that conceptually evolve over time. In addition, the techniques and languages for building ontologies evolve too. This has led to numerous studies in the field of ontology versioning and ontology evolution. This paper presents a new way to manage the lifecycle of an ontology, incorporating both versioning tools and an evolution process. This solution, called VersionGraph, is integrated into the source ontology from its creation in order to make it possible for the ontology to evolve and be versioned. Change management is strongly related to the model in which the ontology is represented. Therefore, we focus on the OWL language in order to take into account the impact of the changes on the logical consistency of the ontology, as specified in OWL DL.

Proceedings Article
01 Jan 2011
TL;DR: The drivers and barriers of participation are presented as a design framework for an online learning community and the question of how to encourage use of such a platform, and its evolution into a self-sustaining community is asked.
Abstract: In this paper we report our efforts to elicit an understanding of the drivers and barriers for participation in a Web 2.0 online community platform to support the unique collection of virtual collaboration requirements inherent in the inter-organization, cross-cultural, and cross-discipline team environments that comprise the Atlantis community. Atlantis is a grant program to stimulate and fund the organization of dual-degree master programs between consortia of European and American universities. The key challenge in this project is neither the analysis nor the construction of the online community platform (though neither is in itself a trivial task), but rather the question of how to encourage use of such a platform, and its evolution into a self-sustaining community. We report our findings from a workshop, interviews and a survey to gain an understanding of the drivers and barriers of participation. The drivers and barriers are then presented as a design framework for an online learning community.

Proceedings Article
01 Jan 2011
TL;DR: The goal of the research is to design the concept and architecture for new collaborative e-business model in order to join knowledge management and intellectual capital under the approach of engineering systems and build a supply design to understand the collaboration among people, processes and systems inside a holistic approach.
Abstract: The goal of the research is to design the concept and architecture for a new collaborative e-business model, in order to join knowledge management and intellectual capital under the approach of engineering systems. A collaborative system is designed as part of a new class of assets, called intellectual capital. We build a supply design to understand the collaboration among people, processes and systems inside a holistic approach. To do that, the paper presents two models: the first model is the integration among intellectual capital, collaborative systems and e-business; the second model is designed to understand the behaviour of software agents. Processes are analyzed on their value; for example, we need to know if the results of a process may be important for someone in order to resolve a specific problem. This concept will be used in a controlled environment, and to do that we need some functions of the software agents to complete one specific process and evaluate some alternatives for the best solution. As a result we get the new process of a collaborative system; we also define the ability to collaborate and leverage knowledge, giving software agents some of the decisions that we would take in a real problem. After the process has been completed, we have improved the design of collaborative e-business performance under the approach of intellectual capital and knowledge management.

Proceedings Article
26 Oct 2011
TL;DR: A new kind of organizational Content Management System that draws on both documentary and social sources of knowledge, helping organizations face the rapid growth of social networks and their adoption by collaborators.
Abstract: Organizations need to face the rapid growth of social networks and their adoption by their collaborators. While organizations require collaborative tools to get their work done, their collaborators seem to prefer social tools like Facebook or Twitter. We present in this article a new kind of organizational Content Management System that takes care of both documentary and social sources of knowledge.

Proceedings ArticleDOI
26 Oct 2011
TL;DR: MedPeer is a new peer-to-peer (P2P) management system for heterogeneous and distributed data sources; it is a super-peer system in which the super-peers are organized by data type and contain an ontological structure specific to each type.
Abstract: In this article, we present MedPeer, a new peer-to-peer (P2P) management system for heterogeneous and distributed data sources. Its principal goal is to provide the tools necessary for the semantic mediation of data of various types (relational, image, text, ...) and for the semantic routing of multimodal queries in a P2P environment. In this environment, each peer can publish the data it wants to share; it is completely autonomous, and its data can belong to different models. MedPeer is a super-peer system in which the super-peers are organized by data type and contain an ontological structure specific to each type. Each peer exports its data in a common format, as a semantically rich ontology, in order to contribute to schema reconciliation. The queries exchanged share a common format, XML documents, and are routed towards the relevant peers thanks to a semantic topology built on top of the existing physical topology.
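The routing idea the abstract describes (super-peers organized by data type, each forwarding typed queries to its own peers) can be illustrated with a minimal sketch. This is a hypothetical toy, not MedPeer's actual API; all class and method names here are invented for illustration.

```python
# Toy sketch of type-based super-peer routing: each super-peer is responsible
# for one data type and forwards incoming queries to the peers registered
# under it. Real MedPeer additionally uses ontologies for semantic matching.

class SuperPeer:
    def __init__(self, data_type):
        self.data_type = data_type
        self.peers = []                 # peers publishing data of this type

    def register(self, peer_name):
        self.peers.append(peer_name)

    def route(self, query):
        # Forward the query to every peer of the matching data type.
        return [(peer, query) for peer in self.peers]

class Network:
    def __init__(self):
        self.super_peers = {}           # data type -> responsible super-peer

    def add_super_peer(self, data_type):
        self.super_peers[data_type] = SuperPeer(data_type)

    def route_query(self, data_type, query):
        sp = self.super_peers.get(data_type)
        return sp.route(query) if sp else []

net = Network()
net.add_super_peer("relational")
net.add_super_peer("image")
net.super_peers["image"].register("peer_A")
net.super_peers["image"].register("peer_B")

# A multimodal query (here, a plain XML string) reaches only image peers.
hits = net.route_query("image", "<query>find sunsets</query>")
```

The key design point this sketch captures is that the semantic topology (the data-type index held by super-peers) is independent of the physical overlay, so queries never flood peers holding irrelevant data types.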

Proceedings Article
01 Jan 2011
TL;DR: A method for the formal specification of Russian government services is presented, which can be used for optimizing, restructuring and checking e-government services, for the design of e-access software, and for semi-automatically producing services’ Web content.
Abstract: Transition to e-government services is a worldwide tendency. However, each country has its own specifics, which need to be taken into account. The given study is dedicated to the transition towards e-government services in Russia. A method for the formal specification of Russian government services is presented. This method can be used for optimizing, restructuring and checking e-government services, for the design of e-access software, and for semi-automatically producing services’ Web content. We have adapted the OntoGov approach to specify the domain ontology. Based on this ontology, individual services are then described through three models: a process model (customized BPMN notation), a document model (based on Feature Diagrams) and a description model. An example of Web-content generation from formal specifications is presented. A pilot deployment of the method, specifying Russian government services that required Russian and Finnish citizens to communicate with each other, is described.

Book ChapterDOI
26 Oct 2011
TL;DR: In this article, the authors apply the theory of planned behavior (TPB) to better understand employees' innovation behavior (IB), and also extend TPB by considering the effects of two unexamined yet important organizational factors: external information awareness (EIA) and proactiveness of innovation strategy (PIS).
Abstract: Because innovation is highly knowledge-intensive, employees’ innovation behavior (IB) plays a central role in knowledge creation and distribution in organizations. Therefore, in knowledge management initiatives, it is important to encourage employees’ IB, which involves developing, promoting, judging, distributing and implementing new ideas at work. From a psychological perspective, this study applies the theory of planned behavior (TPB) to better understand employees’ IB, and also extends TPB by considering the effects of two unexamined yet important organizational factors: external information awareness (EIA) and proactiveness of innovation strategy (PIS). Results from a survey of employees in Japanese organizations indicate that EIA and PIS are positively related to employees’ attitude towards innovation, subjective norm about innovation, and perceived behavioral control over innovation, which, in turn, significantly influence employees’ IB. Employees’ attitude, subjective norm, and perceived behavioral control partially mediate the effects of EIA and completely mediate the influence of PIS on employees’ IB. These findings provide directions for more effective encouragement of employees’ IB, by focusing on improving perceived behavioral control, EIA and PIS.

Proceedings Article
01 Jan 2011
TL;DR: The role of instructors is highlighted in helping and motivating students to overcome their fear of public speaking as well as to use this avenue for refining their communication skills.
Abstract: Knowledge sharing is a key to effective learning. Students with positive attitude towards knowledge sharing are likely to take this behaviour to their workplace which could help achieve organizational knowledge management goals. The main objective of this study was to explore students’ perceptions of class participation and its benefits, barriers to their participation, and the motivational factors that may improve their knowledge sharing. A pre-tested questionnaire was used for data collection, and 188 post-graduate students from Nanyang Technological University, Singapore participated in it. A majority of the students were aware of the benefits of knowledge sharing in their learning process as it provides an opportunity to listen to and appreciate diverse viewpoints, develop social and communication skills, and learn how to organize and present their ideas. The major barriers to class participation were: low English language proficiency, cultural barriers, shyness, and lack of confidence. This paper also provides some suggestions for improving effectiveness of class participation, particularly in the Asian context. It also highlights the role of instructors in helping and motivating students to overcome their fear of public speaking as well as to use this avenue for refining their communication skills.

Book ChapterDOI
26 Oct 2011
TL;DR: This work proposes the full integration of UML statecharts with behavioral knowledge obtained from novel behavioral ontologies into a Unified Software-Knowledge model, applicable to run-time measurements for checking actual software behavior for correctness and efficiency.
Abstract: UML statecharts are a widely accepted standard for modeling software behavior. But, despite the increasing importance of semantics for software behavior, semantics has been treated within UML as a mere reasoning add-on. We propose the full integration of UML statecharts with behavioral knowledge obtained from novel behavioral ontologies into a Unified Software-Knowledge model. These unified models have two important characteristics: first, misbehaviors are explicitly represented; second, behavioral ontologies generate graphs isomorphic to UML statecharts, by construction. This approach is applicable to run-time measurements, checking the actual software behavior for correctness and efficiency. Measurement discrepancies may trigger knowledge discovery mechanisms to update the unified models. The approach is illustrated with statechart examples from the domain of GOF software design patterns.
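The abstract's two key ideas, explicitly represented misbehaviors and run-time checking of observed behavior against the model, can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' Unified Software-Knowledge model; the states, events and function names are invented for illustration.

```python
# Toy sketch: a statechart as a transition table with an explicit Misbehavior
# state. Observed run-time events are replayed against the model; any event
# with no modeled transition is itself classified as a misbehavior.

TRANSITIONS = {
    ("Idle", "request"): "Working",
    ("Working", "done"): "Idle",
    ("Working", "fail"): "Misbehavior",   # misbehavior represented explicitly
}

def check_run(events, start="Idle"):
    """Replay an observed event trace; return the visited states."""
    state = start
    trace = [state]
    for ev in events:
        # An unmodeled (state, event) pair maps to the Misbehavior state.
        state = TRANSITIONS.get((state, ev), "Misbehavior")
        trace.append(state)
        if state == "Misbehavior":
            break                          # discrepancy found; stop replay
    return trace

ok_trace = check_run(["request", "done"])     # conforms to the model
bad_trace = check_run(["request", "crash"])   # unmodeled event at run time
```

In the paper's terms, reaching the explicit misbehavior state during replay corresponds to a measurement discrepancy that could trigger a knowledge discovery step to update the unified model.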