
Showing papers in "Journal of Universal Computer Science in 2005"


Journal Article
TL;DR: An automatic test data generation technique that uses a genetic algorithm, guided by the data flow dependencies in the program, to search for test data covering its def-use associations; experiments evaluate the effectiveness of the proposed GA compared to the random testing technique.
Abstract: One of the major difficulties in software testing is the automatic generation of test data that satisfy a given adequacy criterion. This paper presents an automatic test data generation technique that uses a genetic algorithm (GA), which is guided by the data flow dependencies in the program, to search for test data to cover its def-use associations. The GA conducts its search by constructing new test data from previously generated test data that were evaluated as effective. The approach can be used in test data generation for programs with or without loops and procedures. The proposed GA accepts as input an instrumented version of the program to be tested, the list of def-use associations to be covered, the number of input variables, and the domain and precision of each input variable. The algorithm produces a set of test cases, the set of def-use associations covered by each test case, and a list of uncovered def-use associations, if any. In the parent selection process, the GA uses one of two methods, according to the user's choice: the roulette wheel method or a proposed method called the random selection method. Finally, the paper presents the results of the experiments that were carried out to evaluate the effectiveness of the proposed GA compared to the random testing technique, and to compare the proposed random selection method to the roulette wheel method.
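The roulette wheel parent selection mentioned above can be sketched as follows (a minimal illustration with our own names, not the paper's implementation):

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, fitness in zip(population, fitnesses):
        running += fitness
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

# Fitter candidate test inputs are chosen as parents more often:
parents = [roulette_wheel_select(["t1", "t2", "t3"], [1.0, 3.0, 6.0])
           for _ in range(1000)]
```

In the GA described above, fitness would reflect how close a candidate input comes to covering the targeted def-use association.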

115 citations


Journal Article
TL;DR: The Knowledge Modelling and Description Language (KMDL) is developed and can be used to formalise knowledge-intensive processes with a focus on certain knowledge-specific characteristics and to identify process improvements in these processes.
Abstract: Existing approaches in the area of knowledge-intensive processes focus on integrated knowledge and process management systems, the support of processes with KM systems, or the analysis of knowledge-intensive activities. For capturing knowledge-intensive business processes, well-known and established methods do not meet the requirements of a comprehensive and integrated approach to process-oriented knowledge management. These approaches are not able to visualise, in an adequate manner, the decisions, actions and measures which cause the sequence of the processes. Knowledge-intensive processes exist in parallel to conventional processes and are based on conversions of knowledge within them. To fill these gaps in modelling knowledge-intensive business processes, the Knowledge Modelling and Description Language (KMDL) was developed. The KMDL is able to represent the development, use, offer and demand of knowledge along business processes. Furthermore, it can show the knowledge conversions which take place in addition to the normal business processes. The KMDL can be used to formalise knowledge-intensive processes with a focus on certain knowledge-specific characteristics and to identify process improvements in these processes. The KMDL modelling tool K-Modeler is introduced for computer-aided modelling and analysis. The technical framework and the most important functionalities to support the analysis of the captured processes are introduced in the following contribution.

79 citations



Journal Article
TL;DR: A novel high bit rate LSB audio watermarking method that reduces embedding distortion of the host audio; using the proposed two-step algorithm, watermark bits are embedded into higher LSB layers, resulting in increased robustness against noise addition.
Abstract: In this paper, we present a novel high bit rate LSB audio watermarking method that reduces embedding distortion of the host audio. Using the proposed two-step algorithm, watermark bits are embedded into higher LSB layers, resulting in increased robustness against noise addition. In addition, listening tests showed that the perceptual quality of watermarked audio is higher in the case of the proposed method than in the standard LSB method.
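Plain LSB substitution, the baseline the method builds on, can be sketched as follows (the paper's two-step algorithm additionally minimises the resulting distortion, which is omitted here):

```python
def embed_bit(sample, bit, layer):
    """Write `bit` into the given LSB layer (0 = least significant)
    of an integer audio sample."""
    mask = 1 << layer
    return (sample & ~mask) | (bit << layer)

def extract_bit(sample, layer):
    """Read the bit back from the same layer."""
    return (sample >> layer) & 1

# Embedding into a higher layer (e.g. layer 3) survives low-level noise
# that would destroy a watermark stored in layer 0:
marked = embed_bit(1000, 1, 3)
```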

73 citations


Journal Article
TL;DR: The main novelties of the implementation of Lua 5.0 are discussed: its register-based virtual machine, the new algorithm for optimizing tables used as arrays, the implementation of closures, and the addition of coroutines.
Abstract: We discuss the main novelties of the implementation of Lua 5.0: its register-based virtual machine, the new algorithm for optimizing tables used as arrays, the implementation of closures, and the addition of coroutines.
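A register-based virtual machine of the kind Lua 5.0 adopts can be sketched in miniature (hypothetical opcodes and encoding, not Lua's actual instruction set):

```python
LOADK, ADD, RETURN = range(3)   # toy opcodes

def run(code, constants, nregs):
    """Dispatch loop of a tiny register machine: instructions name
    registers directly instead of pushing and popping a stack."""
    regs = [None] * nregs
    for op, a, b, c in code:
        if op == LOADK:      # R[a] = K[b]
            regs[a] = constants[b]
        elif op == ADD:      # R[a] = R[b] + R[c]
            regs[a] = regs[b] + regs[c]
        elif op == RETURN:   # return R[a]
            return regs[a]

# local x = 2; local y = 3; return x + y
program = [(LOADK, 0, 0, 0), (LOADK, 1, 1, 0),
           (ADD, 2, 0, 1), (RETURN, 2, 0, 0)]
```

A stack machine would need separate push instructions; the register encoding performs the addition in a single instruction.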

72 citations


Journal Article
TL;DR: KM-Services are introduced as a basic concept for Knowledge Management, and the vision of service-oriented knowledge management (KM) is discussed as a realisation approach of process-oriented knowledge management.
Abstract: This paper introduces a new viewpoint in knowledge management by introducing KM-Services as a basic concept for Knowledge Management. The text discusses the vision of service-oriented knowledge management (KM) as a realisation approach of process-oriented knowledge management. In the following, process-oriented knowledge management as defined in the EU project PROMOTE (IST-1999-11658) is presented, and the KM-Service approach to realising it is explained. The last part is concerned with an implementation scenario that uses Web technology to realise a service framework for a KM system.

69 citations


Journal Article
TL;DR: The Tube Map Visualization is a powerful metaphor to communicate a complex project to different target groups and to build up a mutual story to complement traditional project plans of long-term projects where different stakeholders are involved.
Abstract: This article introduces two theoretical concepts for the emerging field of Knowledge Visualization and discusses a new visualization application that was used to communicate a long-term project to various stakeholders in an organization. First, we introduce a theoretical framework and a model for Knowledge Visualization. The framework and the model identify and relate the key aspects of successful Knowledge Visualization applications. Next, we present an evaluation of an implemented Knowledge Visualization application: the Tube Map Visualization. A quality development process had to be established in an education centre for health care professions. Traditional project plans, flyers, and mails failed to attract attention, presented overview and detail insufficiently, and did not motivate the employees to act. Because Visual Metaphors are effective for Knowledge Communication, we developed a customized Knowledge Map based on the tube system metaphor. The Tube Map Visualization illustrates the whole project, where each tube line represents a target group and each station a milestone. The visualization was printed as a poster and displayed at prominent locations in the organization. The findings of an evaluation indicate that the Tube Map Visualization is a powerful metaphor to communicate a complex project to different target groups and to build up a mutual story. The employees considered it useful because it provides overview and detailed information in one image and because it initiates discussion. The Tube Map Visualization is therefore helpful to complement traditional project plans of (1) long-term projects where (2) different stakeholders are involved. The theoretical framework, the model, and the findings have implications for researchers in the fields of Knowledge Management, Knowledge Visualization, Information Visualization, and Communication Sciences.

62 citations


Journal Article
TL;DR: A novel subgroup mining approach for explorative and descriptive data mining, implemented in the VIKAMINE system, is presented, together with several integrated visualization methods to support subgroup mining.
Abstract: Visual mining methods enable the direct integration of the user to overcome major problems of automatic data mining methods, e.g., the presentation of uninteresting results, lack of acceptance of the discovered findings, or limited confidence in these. We present a novel subgroup mining approach for explorative and descriptive data mining implemented in the VIKAMINE system. We propose several integrated visualization methods to support subgroup mining. Furthermore, we describe three case studies using data from fielded systems in the medical domain.

56 citations


Journal Article
TL;DR: The StAC language, which can be used to specify the orchestration of activities in long-running business transactions, is described; a substantial subset of BPEL can be mapped to StAC, thus demonstrating the expressiveness of StAC and providing a formal semantics for BPEL.
Abstract: We describe the StAC language, which can be used to specify the orchestration of activities in long-running business transactions. Long-running business transactions use compensation to cope with exceptions. StAC supports sequential and parallel behaviour as well as exception and compensation handling. We also show how the B notation may be combined with StAC to specify the data aspects of transactions. The combination of StAC and B provides a rich formal notation which allows for succinct and precise specification of business transactions. BPEL is an industry-standard language for specifying business transactions and includes compensation constructs. We show how a substantial subset of BPEL can be mapped to StAC, thus demonstrating the expressiveness of StAC and providing a formal semantics for BPEL.
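The compensation idea behind StAC and BPEL can be sketched as follows (our own minimal stand-in, not either language's semantics): each completed activity registers a compensation, and a failure triggers the registered compensations in reverse order.

```python
def run_transaction(steps):
    """Run (action, compensation) pairs in sequence; if an action fails,
    undo the completed ones by running their compensations in reverse."""
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate()
        raise

log = []

def fail():
    raise RuntimeError("resource unavailable")

steps = [
    (lambda: log.append("book flight"), lambda: log.append("cancel flight")),
    (lambda: log.append("book hotel"), lambda: log.append("cancel hotel")),
    (fail, lambda: None),
]
```

Running `run_transaction(steps)` books the flight and hotel, fails on the third step, then cancels the hotel and flight in that order before re-raising the error.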

53 citations


Journal Article
TL;DR: An approach to capturing classroom context through an identification process using RFID technology as implicit input to the system; natural interaction is achieved because the only requirement for the user is to carry a device, through which they are identified and obtain context services.
Abstract: In recent years, there have been many efforts at research towards obtaining the simple and natural use of computers, with interfaces closer to the user. New visions such as that of the Ubiquitous Computing paradigm emerge. In Ubiquitous Computing the computer is distributed in a series of devices with reduced functionality, spread over the user's environment and communicating wirelessly. With these, context-aware applications are obtained. In this paper we present an approach to the classroom context by identification process using RFID technology, as an implicit input to the system. The main goal is to acquire natural interaction, because the only requirement for the user (teacher or student) is to carry a device (smart label), identifying and obtaining context services. Some of these services and the mechanisms that make them available are described here, together with a scenario of their use in the classroom.

46 citations



Journal ArticleDOI
TL;DR: A classification system based on decision rules that may store up-to-date border examples to avoid unnecessary revisions when virtual drifts are present in data.
Abstract: Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering, and inconsistent rules classify them by distance, as in the nearest-neighbour algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another.
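The covering/nearest-neighbour split described above can be sketched as follows (one-dimensional interval rules for illustration; names and data layout are our own, not the system's):

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    lo: float                 # a 1-D interval rule for illustration
    hi: float
    label: str
    consistent: bool
    examples: list = field(default_factory=list)  # stored (value, label) border examples

    def covers(self, x):
        return self.lo <= x <= self.hi

def classify(x, rules):
    """Consistent covering rules classify directly; otherwise fall back to
    the label of the nearest stored example, nearest-neighbour style."""
    for rule in rules:
        if rule.consistent and rule.covers(x):
            return rule.label
    value, label = min((e for rule in rules for e in rule.examples),
                       key=lambda e: abs(e[0] - x))
    return label
```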

Journal Article
TL;DR: A user-support mechanism based on the sharing of knowledge with other users through collaborative Web browsing is proposed, focusing specifically on the user's interests extracted from his or her own bookmarks.
Abstract: With the exponentially increasing amount of information available on the World Wide Web, it has become more and more difficult for users to find relevant information. Several studies have been conducted on adaptive approaches, in which the user's personal interests are taken into account. In this paper, we propose a user-support mechanism based on the sharing of knowledge with other users through collaborative Web browsing, focusing specifically on the user's interests extracted from his or her own bookmarks. Simple URL-based bookmarks are endowed with semantic and structural information through conceptualization based on ontology. In order to deal with the dynamic usage of bookmarks, ontology learning based on a hierarchical clustering method can be exploited. The system is composed of a facilitator agent and multiple personal agents. In experiments conducted with this system, it was found that approximately 53.1% of the total time was saved during collaborative browsing for the purpose of seeking the equivalent set of information, as compared with normal personal Web browsing.

Journal Article
TL;DR: A fuzzy technique to characterize a policy and to define a Reference Evaluation Model representing different security levels against which policies can be evaluated and compared is proposed.
Abstract: In a world made of interconnected systems which manage huge amounts of confidential and shared data, security plays a significant role. Policies are the means by which security rules are defined and enforced. The ability to evaluate policies is becoming more and more relevant, especially with regard to the cooperation of services belonging to un-trusted domains. We have focused our attention on Public Key Infrastructures (PKIs); at the state of the art, security policy evaluation is performed manually by technical and organizational people coming from the domains that need to interoperate. However, policy evaluation must face uncertainties derived from different perspectives, verbal judgments and lack of information. Fuzzy techniques and uncertainty reasoning can provide a meaningful way of dealing with these issues. In this paper we propose a fuzzy technique to characterize a policy and to define a Reference Evaluation Model representing different security levels against which we are able to evaluate and compare policies. The comparison takes into account not only minimal system needs but also the evaluator's severity; furthermore, it gives clear information regarding policy weaknesses that could help security administrators to better enforce rules. Finally, we present a case study which evaluates the security level of a "legally recognized" policy.


Journal Article
TL;DR: This paper introduces a variant of constraint automata with discrete probabilities and nondeterminism, called probabilistic constraint automata, which can serve for compositional reasoning about connector components modelled by Reo circuits with unreliable channels.
Abstract: Constraint automata have been used as an operational model for Reo, which offers a channel-based framework to compose complex component connectors. In this paper, we introduce a variant of constraint automata with discrete probabilities and nondeterminism, called probabilistic constraint automata. These can serve for compositional reasoning about connector components modelled by Reo circuits with unreliable channels (e.g., channels that might lose or corrupt messages) or with channels with random output values, which can be helpful, e.g., to model randomized coordination principles. Coordination models and languages provide a formalization of the glue-code that binds individual components and organizes the communication and cooperation between them. In the past 15 years, various types of coordination models have been developed, including techniques for the design and the analysis of such models. They all have in common that they yield a clear separation between the internal structure of the components and their relationship that arises through the organization of their interactions.

Journal Article
TL;DR: This paper presents a front-end tool for translating Rebeca to the languages of existing model checkers in order to model check Rebeca models, and demonstrates the automated modular verification and abstraction techniques supported by the tool.
Abstract: Actor-based modeling, with encapsulated active objects which communicate asynchronously, is generally recognized to be well-suited for representing concurrent and distributed systems. In this paper we discuss the actor-based language Rebeca, which is based on a formal operational interpretation of the actor model. Its Java-like syntax and object-based style of modeling make it easy to use for software engineers, and its independent objects as units of concurrency lead to the natural abstraction techniques necessary for model checking. We present a front-end tool for translating Rebeca to the languages of existing model checkers in order to model check Rebeca models. Automated modular verification and abstraction techniques are supported by the tool.

Journal Article
TL;DR: An approach meant to enhance the content of the experience base and improve learning from experiences within information spaces is introduced: weblogs that are maintained during daily work and serve as input for both an experience base and an information element base.
Abstract: There are various Knowledge Management Systems currently available and designed to support knowledge sharing and learning. An example of these are "Experience-based Information Systems" in the domain of Software Engineering, i.e., Information Systems designed to support experience management. Lately, these have become more and more sophisticated from a technical point of view. However, there are several shortcomings that appear to limit the input, the content of these systems and their usage. The problems identified in this paper relate to knowledge acquisition, learning issues, as well as to the users' motivation and trust. We introduce an approach meant to enhance the content of the experience base and improve learning from experiences within information spaces, namely weblogs that are maintained during daily work and serve as input for both an experience base and an information element base. In order to enhance learning, a pedagogical information agent is envisaged for retrieving suitable experiences to be further enriched with additional information elements and produce micro-didactical learning arrangements. In addition, we consider the relevance of motivation and trust issues. An empirical study demonstrates that using weblogs for such an approach is feasible.

Journal ArticleDOI
TL;DR: This article presents a novel adaptable approach that can cope with the challenging inherent features of data streams and shows the results for AOG-based clustering in a resource-constrained environment.
Abstract: Mining data streams has raised a number of research challenges for the data mining community. These challenges include the limitations of computational resources, especially because mining streams of data will most likely be done on a mobile device with limited resources. Also, due to the continuity of data streams, the algorithm should make only one pass or less over the incoming data records. In this article, our Algorithm Output Granularity (AOG) approach to mining data streams is discussed. AOG is a novel adaptable approach that can cope with the challenging inherent features of data streams. We also show the results for AOG-based clustering in a resource-constrained environment.
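The resource-adaptation idea can be sketched as follows (a much-simplified stand-in for AOG with our own names: a one-pass clusterer that coarsens its distance threshold instead of growing beyond its memory budget):

```python
def stream_cluster(stream, max_centers, threshold):
    """One-pass clustering: a point joins the nearest center if within
    `threshold`, else opens a new center.  When the center budget (the
    available memory) is exhausted, the threshold is doubled, adapting
    the output granularity to the resources instead of buffering data."""
    centers = []  # (sum, count) pairs; 1-D data for brevity
    for x in stream:
        if centers:
            i = min(range(len(centers)),
                    key=lambda j: abs(centers[j][0] / centers[j][1] - x))
            if abs(centers[i][0] / centers[i][1] - x) <= threshold:
                centers[i] = (centers[i][0] + x, centers[i][1] + 1)
                continue
        if len(centers) < max_centers:
            centers.append((x, 1))
        else:
            threshold *= 2  # coarsen rather than exceed the budget
            centers[i] = (centers[i][0] + x, centers[i][1] + 1)
    return [s / n for s, n in centers]
```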

Journal Article
TL;DR: The use of semantic metadata for Learning Object (LO) contextualization is proposed in order to adapt instruction to the learner's cognitive requirements in three different ways: background knowledge, knowledge objectives and the most suitable learning style.
Abstract: Despite the increasing importance gained by e-learning standards in the past few years, and the unquestionable goals reached (mainly regarding interoperability among e-learning contents), current e-learning standards are not yet sufficiently aware of the context of the learner. This means that only limited support for adaptation to individual characteristics is currently being provided. In this article, we propose the use of semantic metadata for Learning Object (LO) contextualization in order to adapt instruction to the learner's cognitive requirements in three different ways: background knowledge, knowledge objectives and the most suitable learning style. In our pilot e-learning platform the context for LOs is addressed in two different ways: knowledge domain and instructional design. We propose the use of ontologies as the knowledge representation mechanism to allow the delivery of learning material that is relevant to the current situation of the learner.

Journal Article
TL;DR: An original approach to modelling internal structure of artificial cognitive agents and the phenomenon of language grounding is presented and the importance of non-conscious embodied knowledge in language grounding and production is accepted.
Abstract: An original approach to modelling internal structure of artificial cognitive agents and the phenomenon of language grounding is presented. The accepted model for the internal cognitive space reflects basic structural properties of human cognition and assumes the partition of cognitive phenomena into conscious and 'non-conscious'. The language is treated as a set of semiotic symbols and is used in semantic communication. Semiotic symbols are related to the internal content of empirical knowledge bases in which they are grounded. This relation is given by the so-called epistemic satisfaction relations defining situations in which semiotic symbols are adequate (grounded) representations of embodied experience. The importance of non-conscious embodied knowledge in language grounding and production is accepted. An example of application of the proposed approach to the analysis of grounding requirements is given for the case of logic equivalences extended with modal operators of possibility, belief and knowledge. Implementation issues are considered.

Journal Article
TL;DR: This paper reviews and presents an extension of Andrews' curves in which a moving three-dimensional image is created, allowing the authors to see clouds of data points moving as they move along the curves.
Abstract: Computers are still much less capable than the human eye at pattern matching. This ability can be used quite straightforwardly to identify structure in a data set when it is two or three dimensional. With data sets of more than 3 dimensions, some kind of transformation is always necessary. In this paper we review in depth and present an extension of one of these mechanisms: Andrews' curves. With Andrews' curves we use a curve to represent each data point. A human can run his eye along a set of curves (representing the members of the data set) and identify particular regions of the curves which are optimal for identifying clusters in the data set. Of interest in this context is our extension, in which a moving three-dimensional image is created in which we can see clouds of data points moving as we move along the curves; in a very real sense, the data which dance together are members of the same cluster.
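Each data point x = (x1, x2, x3, ...) is mapped to the curve f_x(t) = x1/sqrt(2) + x2*sin(t) + x3*cos(t) + x4*sin(2t) + ..., which can be computed as follows (a standard sketch of Andrews' construction, not the paper's three-dimensional extension):

```python
import math

def andrews_curve(x, t):
    """Value of the Andrews' curve for data point x at t in [-pi, pi]:
    f_x(t) = x[0]/sqrt(2) + x[1]*sin(t) + x[2]*cos(t) + x[3]*sin(2t) + ..."""
    value = x[0] / math.sqrt(2)
    for i, xi in enumerate(x[1:], start=1):
        k = (i + 1) // 2                        # frequency multiplier 1, 1, 2, 2, ...
        trig = math.sin if i % 2 == 1 else math.cos
        value += xi * trig(k * t)
    return value

# Points that are close in data space trace similar curves, so clusters
# appear as bundles of curves that move together.
```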

Journal Article
TL;DR: The concept of knowledge stance is discussed in order to relate functions from process models to actions from activity theory, thus detailing the context relevant for knowledge work.
Abstract: During the last years, a large number of information and communication technologies (ICT) have been proposed to support knowledge management (KM). Several KM instruments have been developed and implemented in many organizations that require support by ICT. Recently, many of these technologies have been bundled in the form of comprehensive, enterprise-wide knowledge infrastructures. The implementation of both instruments and infrastructures requires adequate modeling techniques that consider the specifics of modeling context in knowledge work. The paper studies knowledge work, KM instruments and knowledge infrastructures. Modeling techniques are reviewed, especially for business process management and activity theory. The concept of knowledge stance is discussed in order to relate functions from process models to actions from activity theory, thus detailing the context relevant for knowledge work.

Journal Article
TL;DR: A hybrid architecture for reconciliation of knowledge management and workflow management systems is proposed in order to support process participants in organizations, who are increasingly distributed and need to share and distribute knowledge artifacts.
Abstract: Current trends in collaborative knowledge management emphasize the importance of inter- and intra-organizational business process support. Enactment of business processes has primarily been a domain of workflow management systems. In this paper we propose a hybrid architecture for the reconciliation of knowledge management and workflow management systems in order to support process participants in organizations, who are increasingly distributed and need to share and distribute knowledge artifacts. Today one pressing challenge is to utilize software so as to create, share, and exchange (knowledge) work in collaborative knowledge activities across locations, while still being business process aware. This paper develops a conceptual framework, discusses a software architecture, and presents examples of a software system implementation for activity-based knowledge management for global project teams.

Journal Article
TL;DR: A novel spring-based interactive Information Visualization method for analysing psychotherapeutic data in more depth; the combination of various visualization and interaction methods enables the user to find new predictors for a positive or negative course of the therapy.
Abstract: Tracking and comparing psychotherapeutic data derived from questionnaires involves a number of highly structured, time-oriented parameters. Descriptive and other statistical methods are only suited for partial analysis. Therefore, we created a novel spring-based interactive Information Visualization method for analysing these data in more depth. With our method the user is able to find new predictors for a positive or negative course of the therapy due to the combination of various visualization and interaction methods.

Journal Article
TL;DR: This paper contains a completely formal (and mechanically proved) development of some algorithms dealing with a linked list supposed to be shared by various processes.
Abstract: This paper contains a completely formal (and mechanically proved) development of some algorithms dealing with a linked list supposed to be shared by various processes. These algorithms are executed in a highly concurrent fashion by an unknown number of such independent processes. These algorithms were first presented in (MS96) by M.M. Michael and M.L. Scott. Two other developments of the same algorithms have been proposed recently in (YS03) (using the 3VMC Model Checker developed by E. Yahav) and in (DGLM04) (using I/O Automata and PVS).

Journal ArticleDOI
TL;DR: A constructive proof of the Stone-Yosida representation theorem for Riesz spaces, motivated by considerations from formal topology, is presented; this theorem implies the Gelfand representation theorem for C*-algebras of operators on Hilbert spaces as formulated by Bishop and Bridges.
Abstract: We present a constructive proof of the Stone-Yosida representation theorem for Riesz spaces motivated by considerations from formal topology. This theorem is used to derive a representation theorem for f-algebras. In turn, this theorem implies the Gelfand representation theorem for C*-algebras of operators on Hilbert spaces as formulated by Bishop and Bridges. Our proof is shorter, clearer, and we avoid the use of approximate eigenvalues.

Journal Article
TL;DR: This paper describes several educational computer tools used successfully to support Programming learning and presents a global environment which integrates them, allowing a broader approach to Programming teaching and learning.
Abstract: Computer Programming learning is a difficult process. Experience has demonstrated that many students find it difficult to use programming languages to write programs that solve problems. In this paper we describe several educational computer tools used successfully to support Programming learning and we present a global environment which integrates them, allowing a broader approach to Programming teaching and learning. This environment uses program animation and the Computer-Supported Collaborative Learning (CSCL) paradigm.

Journal Article
TL;DR: This paper proposes two logical structures for representing inconsistent knowledge, conjunction and disjunction, and for each defines the semantics and formulates the consensus problem, the solution of which would resolve the inconsistency.
Abstract: Inconsistency of knowledge may appear in many situations, especially in distributed environments in which autonomous programs operate. Inconsistency may lead to conflicts, whose resolution is necessary for the correct functioning of an intelligent system. Inconsistency of knowledge in general means a situation in which some autonomous programs (like agents) generate different versions (or states) of knowledge on the same subject referring to a real world. In this paper we propose two logical structures for representing inconsistent knowledge: conjunction and disjunction. For each of them we define the semantics and formulate the consensus problem, the solution of which would resolve the inconsistency. Next, we work out algorithms for consensus determination. Consensus methodology has proved useful in solving conflicts and should also be effective for knowledge inconsistency resolution.
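A minimal flavour of consensus determination (our own toy illustration: attribute-wise majority vote over the agents' knowledge versions, far simpler than the paper's algorithms):

```python
from collections import Counter

def consensus(versions):
    """For each attribute, pick the value reported by the most agents."""
    attributes = {key for version in versions for key in version}
    return {
        key: Counter(v[key] for v in versions if key in v).most_common(1)[0][0]
        for key in attributes
    }
```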

Journal Article
TL;DR: Semantic technologies are exploited to infer the relationships among Web requests, and a hybrid approach combining this method with existing heuristics is proposed, which can track the behavior of users who tend to easily change their intentions and interests.
Abstract: Efficient data preparation needs to discover the underlying knowledge from complicated Web usage data. In this paper, we have focused on two main tasks: semantic outlier detection from online Web request streams and segmentation (or sessionization) of them. We thereby exploit semantic technologies to infer the relationships among Web requests. Web ontologies such as taxonomies and directories can label each Web request with all the corresponding hierarchical topic paths. Our algorithm consists of two steps. The first step is the nested repetition of top-down partitioning for establishing a set of candidate session boundaries, and the next step is an evaluation process of bottom-up merging for reconstructing segmented sequences. In addition, we propose a hybrid approach of this method, combining it with the existing heuristics. Using a synthesized dataset and a real-world dataset of the access log files of IRCache, we conducted experiments and showed that the semantic preprocessing method improves the performance of rule discovery algorithms. This means that we can conceptually track the behavior of users who tend to easily change their intentions and interests, or simultaneously try to search for various kinds of information on the Web.
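The sessionization task can be illustrated with a much simpler single-pass rule (our own stand-in, not the paper's two-step partition-and-merge algorithm): start a new session whenever consecutive requests share no topic labels.

```python
def sessionize(topic_paths, min_overlap=1):
    """Split a stream of hierarchical topic paths into sessions: a new
    session starts when a request shares fewer than `min_overlap`
    position-wise topic labels with the previous request."""
    sessions = [[topic_paths[0]]]
    for prev, cur in zip(topic_paths, topic_paths[1:]):
        shared = sum(1 for a, b in zip(prev, cur) if a == b)
        if shared >= min_overlap:
            sessions[-1].append(cur)
        else:
            sessions.append([cur])
    return sessions
```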