
Showing papers in "Software Engineering Research and Practice in 2006"


Journal Article
TL;DR: This paper proposes an automatic software metrics analysis tool and a methodology for early-stage effort estimation for software systems, and generates an effort estimation model after a correlation analysis that determines the relationship between effort and UML Points.
Abstract: UML-based object-oriented metrics are well suited to software measurement. Many researchers have produced effort estimation models for software systems. Estimating effort in the early stages of software development is one of the most important problems faced by software developers and managers. UML-related information can serve as an accurate source for effort estimation. In this paper, we propose an automatic software metrics analysis tool and a methodology for early-stage effort estimation for software systems. Using this method, the developer/manager can analyze a software system with a function point-like analysis. UML Points is a new concept that combines Use Case Points and Class Points with our own definitions to provide software system size information. Based on UML Points, we generate an effort estimation model after a correlation analysis that determines the relationship between effort and UML Points.
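The abstract does not state the model's exact form; a minimal sketch, assuming a simple least-squares fit of effort against a composite UML Points score, could look like the following. All data values, names, and coefficients are illustrative, not the paper's.

```java
import java.util.Arrays;

/** Illustrative only: fits effort = a + b * UML Points by least squares and
 *  reports Pearson's r, mirroring the correlation analysis the abstract
 *  describes. The project data below is invented. */
public class UmlPointsEffortModel {

    public static void main(String[] args) {
        // Hypothetical historical projects: composite UML Points score and effort (person-hours).
        double[] umlPoints = {120, 185, 240, 310, 402, 455};
        double[] effort    = {950, 1400, 1820, 2300, 3100, 3400};

        double meanX = Arrays.stream(umlPoints).average().orElse(0);
        double meanY = Arrays.stream(effort).average().orElse(0);

        double sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < umlPoints.length; i++) {
            double dx = umlPoints[i] - meanX, dy = effort[i] - meanY;
            sxy += dx * dy;
            sxx += dx * dx;
            syy += dy * dy;
        }

        double r = sxy / Math.sqrt(sxx * syy);   // correlation between UML Points and effort
        double b = sxy / sxx;                    // slope: effort per UML Point
        double a = meanY - b * meanX;            // intercept

        System.out.printf("Pearson r = %.3f%n", r);
        System.out.printf("Estimated effort = %.1f + %.2f * UMLPoints%n", a, b);
        System.out.printf("Prediction for 350 UML Points: %.0f person-hours%n", a + b * 350);
    }
}
```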

32 citations


Journal Article
TL;DR: This paper presents a novel, general-purpose OSR mechanism that is more amenable to optimization than prior approaches and improves code quality over the extant state of the art, resulting in performance gains.
Abstract: Efficient invalidation and dynamic replacement of executing code – on-stack replacement (OSR) – is necessary to facilitate effective, aggressive specialization of object-oriented programs that are dynamically loaded, incrementally compiled, and garbage collected. Extant OSR mechanisms restrict the performance potential of program specialization since their implementations are special-purpose and restrict compiler optimization. In this paper, we present a novel, general-purpose OSR mechanism that is more amenable to optimization than prior approaches. In particular, we decouple the OSR implementation from the optimization process and update the program state information incrementally during optimization. Our OSR implementation efficiently enables the use of code specializations that are invalidated by any event – including those external to program code execution. We improve code quality over the extant state of the art, resulting in performance gains of 1-31%, and 9% on average.

31 citations


Journal Article
TL;DR: Entropy scores showed that the collection of OO classes requiring changes between software versions had a higher composite entropy score than those classes that did not undergo changes between versions, and a pattern of repeated component modification indicated that decision tree analysis may be more effective in analyzing software degradation.
Abstract: The term 'software entropy' has been anecdotally defined to mean that software declines in quality, maintainability and understandability throughout its lifetime. While there are numerous software metrics that assess "snapshots" of software maintainability, few assess software degradation at multiple, discrete points in the life cycle. Assessing object-oriented (OO) software degradation is more art than science. Recent studies have shown that OO software degradation may be assessed by measuring the increase in the number of "links", or coupling, within an abstraction model and between abstraction models of the software. We believe that software degradation may also be measured using cyclomatic complexity, since it has been shown to be highly correlated with the fault-proneness of OO classes. We take the approach of defining software decay in terms of Shannon entropy and McCabe cyclomatic complexity using industry-established complexity threshold criteria. We use the Rosenberg WMC risk threshold criteria and the McCabe risk interpretation threshold criteria in our experiment. We applied this metric retrospectively to Mozilla Rhino, an open-source implementation of JavaScript written in Java. Our initial findings were inconclusive since the number of software revisions was limited. However, we conducted further analyses and showed that components with high cyclomatic complexities were associated with more maintenance activities than components with lower cyclomatic complexities. Entropy scores showed that the collection of OO classes requiring changes between software versions had a higher composite entropy score than those classes that did not undergo changes between versions. Additionally, a pattern of repeated component modification was detected in our secondary analysis, indicating that decision tree analysis may be more effective in analyzing software degradation.
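The abstract combines Shannon entropy with cyclomatic-complexity risk thresholds; a minimal sketch of a composite entropy score computed from the distribution of classes across risk bands might look like the code below. The threshold values and per-class complexities are assumptions for illustration, not the paper's data.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative sketch: classify classes into McCabe-style risk bands by
 *  cyclomatic complexity, then compute the Shannon entropy of the resulting
 *  distribution as a composite "decay" score for one software version.
 *  Band boundaries and inputs are assumptions, not taken from the paper. */
public class ComplexityEntropy {

    // Commonly cited McCabe risk interpretation (assumed here):
    // <=10 low, 11-20 moderate, 21-50 high, >50 very high risk.
    static String riskBand(int cc) {
        if (cc <= 10) return "low";
        if (cc <= 20) return "moderate";
        if (cc <= 50) return "high";
        return "very high";
    }

    static double shannonEntropy(Map<String, Integer> counts) {
        int total = counts.values().stream().mapToInt(Integer::intValue).sum();
        double h = 0.0;
        for (int c : counts.values()) {
            if (c == 0) continue;
            double p = (double) c / total;
            h -= p * (Math.log(p) / Math.log(2));   // entropy in bits
        }
        return h;
    }

    public static void main(String[] args) {
        // Hypothetical per-class cyclomatic complexities for one version.
        int[] complexities = {4, 7, 12, 3, 25, 18, 9, 55, 6, 14};

        Map<String, Integer> counts = new LinkedHashMap<>();
        for (int cc : complexities) {
            counts.merge(riskBand(cc), 1, Integer::sum);
        }

        System.out.println("Risk band distribution: " + counts);
        System.out.printf("Composite entropy score: %.3f bits%n", shannonEntropy(counts));
    }
}
```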

28 citations


Journal Article
TL;DR: Several key risks that must be addressed when dealing with late changes to requirements are identified, and techniques for handling them are discussed.
Abstract: One certainty in software development is that all projects will have to deal with change. Being able to effectively handle proposed changes is crucial for allowing continued development of a software project to occur. In order to effectively manage the changes and their effects, developers must first assess the risks involved in making the change. To understand the risks, the project manager must determine how the change will affect not only the source code, but also the entire project. These risks may affect a project's schedule, budget, and quality factors. Risk assessment will help to determine if the desired change can be implemented into the system. This paper identifies risks associated with late changes to the software requirements. Late changes are those changes that occur after one cycle of the development process has been completed and a working version of the system exists. It is important to understand late changes, because late changes to requirements often impose the greatest cost on an ongoing development project, in both time and money. In this paper we identify several key risks that must be addressed when dealing with late changes to requirements. Then we provide a discussion of techniques for handling them.

26 citations


Journal Article
TL;DR: In this article, two major and strongly related techniques are identified and discussed, test case modeling and an evolutionary approach to model transformation; the paper is a follow-up on [13] and [14].
Abstract: Model driven architecture (MDA) concentrates on the use of models during software development. An approach using models as the central development artifact is more abstract, more compact and thus more effective and probably also less error-prone. Although the ideas of MDA have existed for years, there is still much to improve in the development process as well as the underlying techniques and tools. Therefore, this paper is a follow-up on [13], reexamining and updating the statements made there. Two major and strongly related techniques are identified and discussed: test case modeling and an evolutionary approach to model transformation.

26 citations


Journal Article
TL;DR: A model of maintainability would have a substantial long-term impact within research, industry, standardisation authorities and education; the software community is urged to gather its resources to develop the model.
Abstract: The lack of a commonly defined maintainability model hinders us from evaluating and certifying products with respect to maintainability. We cannot compare different products within and across organisations. We have difficulty evolving and maintaining software. We cannot try new development paradigms and evaluate their effects on product maintainability. We have no commonly defined maintainability model on which to base our software engineering research. In this paper, we outline a suggestion for a model of maintainability. We urge the software community to gather its resources in order to develop the model. Such a model would have a substantial long-term impact within research, industry, standardisation authorities and education. This paper lists ideas for a maintainability model with the aim of providing an initial milestone for ample discussion within the software engineering community.

21 citations


Journal Article
TL;DR: An experimental study of two groups of undergraduate students (seniors), one developing software in the conventional way by performing unit testing after development and the other by extracting test cases before implementation as in Agile Programming, showed that the software had fewer faults when developed using Agile Programming.
Abstract: In this paper, we conduct an experimental study of two groups of undergraduate students (seniors), one developing software in the conventional way by performing unit testing after development and the other by extracting test cases before implementation as in Agile Programming. Both groups developed the same software using an incremental and iterative approach. The results showed that the software had fewer faults when developed using Agile Programming; the quality of the software was also better and productivity increased. Keywords: Test driven development, agile programming, case study. 1. Introduction. Test-Driven Development (TDD) is a technique that involves writing test cases first and then implementing the code necessary to pass the tests. The goal is to obtain immediate feedback and thereby construct the program incrementally. This technique is heavily emphasized in Agile or Extreme Programming [1, 2, 3]. This process of designing test cases prior to implementation is termed the "Test-First" approach. We consider only unit testing (by the programmers), not integration or acceptance testing; however, we do take into account the number of faults found while SQA performs formal unit testing, integration testing and acceptance testing in order to measure the quality of the software produced. The first step in this approach is to quickly add a test, basically just enough code to fail. We then run our tests, generally all of them, though to finish the process quickly only a subset of the tests may be run, to make sure that the new test does fail. We then update the code in order to pass the new test. Now we run our tests again. If they fail, we have to update and retest; otherwise we add the next functionality. There are no particular rules for forming the test cases, but more tests are added throughout the implementation. Refactoring should also be performed in agile programming; that is, programmers alternate between adding new tests and functionality and improving the code's consistency. It is done to improve the readability of the code, to change the design, or to remove unwanted code. There are various advantages to employing TDD. Programmers tend to know immediately whether a new feature has been added in accordance with the specifications. The process is performed in steps comprising small parts and is hence easier to manage. Few faults tend to be found during acceptance testing, and maintenance can be viewed as another increment or feature addition, which makes it easier. There is no particular design phase, and the software is built through the process of refactoring. In short, TDD improves programmer productivity and software quality. A number of studies [4, 5, 6, 7] have been performed to test the effectiveness of TDD, and the results give mixed opinions. We perform an experiment with two groups of students, one developing software the conventional way by testing it after implementation and the other through TDD. In both groups, test cases were developed by the programmers and regression testing was performed. The only difference is that in TDD the test cases are written prior to implementation and run throughout development, whereas in the conventional way they are written and run after implementation. Each group consisted of 9 undergraduate students, and the whole study lasted 3 months.
In this paper, we investigate through experimental studies the promise of the "Test-First" strategy emphasized in agile programming.
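As a concrete illustration of the test-first cycle the abstract walks through (write a failing test, write just enough code to pass it, refactor), a minimal JUnit 4 sketch might look like this. The class and requirement are invented for illustration and are not from the study.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

/** Step 1 of the test-first cycle: write tests that fail because the
 *  feature does not exist yet. The ShoppingCart class and its API are
 *  hypothetical examples. */
public class ShoppingCartTest {

    @Test
    public void totalOfEmptyCartIsZero() {
        assertEquals(0.0, new ShoppingCart().total(), 0.001);
    }

    @Test
    public void totalSumsItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem(2.50);
        cart.addItem(1.25);
        assertEquals(3.75, cart.total(), 0.001);
    }
}

/** Step 2: write just enough production code to make the tests pass;
 *  step 3 would be to refactor while keeping the tests green. */
class ShoppingCart {
    private double total;

    void addItem(double price) {
        total += price;
    }

    double total() {
        return total;
    }
}
```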

20 citations


Journal Article
TL;DR: This approach provides a formal method, based on mapping rules, to validate whether the dependencies between product assets are consistent with those between requirements, in order to decrease inconsistencies between assets and increase reuse in a product line.
Abstract: Domain requirements dependencies have a very strong influence on all development processes of member products in a software product line (SPL), especially on the product line architecture. There are some feature-oriented approaches to managing requirements dependencies in software product lines; however, few of them deal with the mapping from requirements to product line architecture. This paper presents an approach to analyzing the influence of domain requirements dependencies on product line architecture. Not only is a feature dependencies classification defined, but mapping rules from requirements to features and from features to architecture are also developed to decrease inconsistencies between assets and increase reuse in a product line. The approach provides a formal method to validate, based on these mapping rules, whether the dependencies between product assets are consistent with those between requirements. A case study in the spot and futures transaction domain is described to illustrate the approach.

19 citations



Journal Article
TL;DR: This study forms a distributed team to work on a software project in which a wiki is used as the major communication and coordination tool.
Abstract: Software development as a global enterprise is currently a reality for many large corporations and is one of the most rapidly growing trends in the software industry. Because developers in global software development are located in different cities, different countries, or even different hemispheres, communication and coordination are the major concerns in the management of distributed teams. Traditional communication mechanisms such as face-to-face meetings are no longer appropriate for distributed software development. Therefore, there is a demand for new communication and coordination tools that can support distributed development. The wiki was invented as a tool for writing on the Web; it contains freely expandable collections of interlinked pages which are easily editable by any user with a Web browser. In this study, we form a distributed team to work on a software project in which a wiki is used as the major communication and coordination tool. The goal of this study is to understand how a wiki can facilitate communication, coordination, and documentation in distributed software development.

19 citations


Journal Article
TL;DR: This paper explores the possibilities and potential impact podcasts will have in educating students in the 21st century and proposes a strategy to address this need.
Abstract: We are well into the twenty-first century, and our technology is advancing at blazing speeds. Yet our education methodologies suffer the fate of remaining largely antiquated. Students and educators alike have become more sophisticated; the tools of learning must reflect and compensate for this intellectual advancement. Podcasts have greatly contributed towards that need. Podcasts are simple, effective, dynamic tools that will change the way that students and educators interact in the classroom and in cyberspace. This paper explores the possibilities and potential impact podcasts will have in educating students in the 21st century.

Journal Article
TL;DR: CATIA as discussed by the authors is a software traceability approach that integrates the high level with the low level software models of object-oriented software that include the requirements, test cases, design and code.
Abstract: Software traceability and its subsequent impact analysis help relate the consequences, or ripple effects, of a proposed change across different levels of a software system. Our software traceability approach is distinguished by its ability to integrate the high-level and low-level software models of object-oriented software, including the requirements, test cases, design and code. It supports top-down and bottom-up traceability when tracing for potential effects. The objective of this paper is to present our validation experiment on a case study of an embedded software system. It determines the effectiveness of our approach via a prototype tool called CATIA. The results reveal that the nature of the components at different traceability levels affects various aspects of the effectiveness metrics.

Journal Article
TL;DR: This paper presents a conceptual framework illustrating the relationships between the different predictor variables and agile software development success, and provides a consolidated picture of the different predictors of agile software development success.
Abstract: Agile software development methodologies have recently gained widespread popularity. The Agile Manifesto states the value of "individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan" (Fowler, 2002). However, little is known about how effective and efficient agile practices are compared to traditional methodologies, and what their success factors are. There have been several disparate anecdotal reports about the success of software development projects using agile methodologies. In this paper we provide a consolidated picture of the different predictors of agile software development success. This is intended to be a review paper. We also present a conceptual framework illustrating the relationships between the different predictor variables and agile software development success.

Journal Article
TL;DR: The goal is to define updating styles in order to represent and reuse update expertise, and to build an environment based on an updating style library in order to assist architects in modifying their component-based architectures.
Abstract: Our paper deals with the update of component-based software architectures, i.e. all the modifications that can be performed on the architecture's elements to satisfy various requirements. Current research on component-based software engineering proposes various mechanisms to adapt, evolve, customize, or reconfigure such architectures. We are convinced that these mechanisms, however powerful, still require the architects' expertise. Moreover, we observe that this expertise conforms to a style, i.e. a way of updating which privileges certain concerns over others. This explains why one solution is selected instead of another for the same update issue. Our goal is to define updating styles in order to represent and reuse update expertise. We wish to use this work to build an environment based on an updating style library in order to assist architects in modifying their component-based architectures.

Journal Article
TL;DR: It is promising that part of UML can be formalized and translated to Alloy to allow automatic model validation and analysis in order to reduce errors in the requirements and design stages.
Abstract: Alloy is a new modeling language for software design, while the Unified Modeling Language (UML) is a standard modeling language widely used in industry. This paper analyzes the similarities and differences between Alloy and UML. It focuses on the complexity, accuracy, and expressiveness differences between the two languages. Both Alloy and UML can be used to specify the requirements for designing complex software systems. The syntax of Alloy is largely compatible with UML. UML is more complicated, while Alloy is more concise. UML is more ambiguous, while Alloy is more accurate. UML is more expressive, while Alloy is more abstract. It is promising that part of UML can be formalized and translated to Alloy to allow automatic model validation and analysis in order to reduce errors in the requirements and design stages.

Journal Article
TL;DR: This work proposes to model such collaborations as distributed services from a centralised as well as a distributed perspective to separate business logic from implementation and to evaluate whether the distributed perspective is a consistent and correct implementation of the global perspective.
Abstract: Business process integration across enterprise boundaries is a complex task. Personnel from different enterprises lack a common understanding of domain-specific terms or the essentials of a business process. Modeling support using reasonably simple descriptive formalisms is crucial for communication purposes when combined with suitable abstraction techniques for managing complexity. System validation requires a precise formal semantics for extending ad hoc analysis with formal methods and for directly translating and executing the provided models. We propose to model such collaborations as distributed services from a centralised as well as a distributed perspective to separate business logic from implementation. The centralised perspective is modeled with UML activity diagrams, which are validated with model checking techniques. The distributed perspective is implemented using Web service technology such as WS-BPEL and WSDL, but is also abstracted into a form suitable for model checking to evaluate whether the distributed perspective is a consistent and correct implementation of the global perspective.

Journal Article
TL;DR: The present paper investigates why D-ART and RRT behave better for lower failure rates, and analyzes the F-measure distribution and the spatial distribution of single test cases.
Abstract: Adaptive Random Testing (ART) denotes a family of random testing methods that are designed to be more effective than Random Testing (RT). Mostly, these methods have been investigated using the mean F-measure, which denotes the random number of test cases necessary to detect the first failure. The two most important ART methods, namely Distance-Based ART (D-ART) and Restricted Random Testing (RRT), perform worse for higher failure rates than for lower failure rates. Furthermore, all previous publications on ART analyzed these methods for testing with unlimited resources. The present paper investigates why D-ART and RRT behave better for lower failure rates. To this end, the F-measure distribution and the spatial distribution of single test cases are analyzed. Thereby, shortcomings of D-ART and RRT are revealed. Improved ART methods are presented based on our findings. Furthermore, the usefulness of the F-measure distribution of testing with unlimited resources for resource-constrained testing is explained. Finally, the ART methods are compared to RT for both cases, i.e. with and without resource limitations.
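To make the Distance-Based ART idea concrete: in the usual fixed-size candidate-set formulation, each step generates a small candidate set and executes the candidate farthest from all previously executed tests. Below is a minimal one-dimensional sketch with an invented failure region; the domain, candidate-set size, and failure rate are illustrative, not taken from the paper.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Minimal sketch of Distance-Based ART (fixed-size candidate set) on a
 *  one-dimensional input domain [0,1). The failure region is invented; the
 *  F-measure is the number of tests executed until the first failure. */
public class DistanceBasedArtSketch {

    static final Random RNG = new Random(42);
    static final int CANDIDATES = 10;             // size of each candidate set

    // Hypothetical failure region: a small interval inside [0, 1).
    static boolean fails(double input) {
        return input >= 0.37 && input < 0.375;    // failure rate 0.005
    }

    public static void main(String[] args) {
        List<Double> executed = new ArrayList<>();
        double test = RNG.nextDouble();           // the first test case is purely random

        while (!fails(test)) {
            executed.add(test);
            double best = 0, bestMinDist = -1;
            for (int c = 0; c < CANDIDATES; c++) {
                double candidate = RNG.nextDouble();
                double minDist = Double.MAX_VALUE;
                for (double e : executed) {
                    minDist = Math.min(minDist, Math.abs(candidate - e));
                }
                if (minDist > bestMinDist) {      // keep the candidate farthest from all executed tests
                    bestMinDist = minDist;
                    best = candidate;
                }
            }
            test = best;
        }
        System.out.println("First failure found after " + (executed.size() + 1) + " tests (F-measure).");
    }
}
```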

Journal Article
TL;DR: The purpose was to create an approach that could be used to analyze the relations between changes and defects; it was found that developers who had made fewer changes were more likely to make changes initiated by defect reports.
Abstract: In this paper we study the defect management and version management systems that are used in many Open Source software projects to manage development and maintenance processes. Usually, those systems are not integrated with each other even though there is a conceptual relation between the stored information. Our purpose was to create an approach that could be used to analyze the relations between changes and defects. To evaluate our approach we analyzed two case studies, Apache HTTP Server and Mozilla Firefox, which are well-known examples of Open Source software. We found that only a small percentage of changes in the source code of Apache were initiated by defect reports, whereas in Mozilla over 60 percent of changes were initiated by defect reports. Furthermore, we found that developers who had made fewer changes were more likely to make changes that were initiated by defect reports.
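The abstract does not spell out the linking heuristic; one common way to relate version-management changes to defect reports is to scan commit messages for bug-tracker identifiers, roughly as sketched below. The message format and the ID pattern are assumptions for illustration, not the paper's actual method.

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Illustrative sketch: classify commits as defect-initiated when their log
 *  message references a defect report ID (e.g. "Bug 12345" in a Bugzilla-style
 *  tracker). The messages and the pattern are invented. */
public class ChangeDefectLinker {

    private static final Pattern DEFECT_ID = Pattern.compile("(?i)\\bbug\\s*#?(\\d+)");

    static boolean initiatedByDefect(String commitMessage) {
        Matcher m = DEFECT_ID.matcher(commitMessage);
        return m.find();
    }

    public static void main(String[] args) {
        List<String> commitMessages = List.of(
                "Fix crash on startup, bug 31337",
                "Refactor configuration parser",
                "Bug #204: correct header handling",
                "Add new logging option");

        long defectInitiated = commitMessages.stream()
                .filter(ChangeDefectLinker::initiatedByDefect)
                .count();
        System.out.printf("%d of %d changes initiated by defect reports (%.0f%%)%n",
                defectInitiated, commitMessages.size(),
                100.0 * defectInitiated / commitMessages.size());
    }
}
```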

Journal Article
TL;DR: Inspection of Concurrent Systems: Combining Tables, Theorem Proving and Model Checking, Author: Vera Pantelic, Location: Thode.
Abstract: Title: Inspection of Concurrent Systems: Combining Tables, Theorem Proving and Model Checking, Author: Vera Pantelic, Location: Thode

Journal Article
TL;DR: This work describes such an evolutionary system as a Software Product Line-based approach using the MaCMAS Agent-Oriented methodology; a multiagent system is an appropriate means of representing a changing enterprise architecture and the interaction between components in it.
Abstract: We view an evolutionary system as being a software product line. The core architecture is the unchanging part of the system, and each version of the system may be viewed as a product from the product line. Each "product" may be described as the core architecture with some agent-based additions. The result is a multiagent system software product line. We describe such a Software Product Line-based approach using the MaCMAS Agent-Oriented methodology. The approach scales to enterprise architectures, as a multiagent system is an appropriate means of representing a changing enterprise architecture and the interaction between components in it.

Journal Article
TL;DR: This paper addresses the issue of retrieving the most appropriate component from a component repository using a Genetic Algorithm-based second step, and finds that the resulting initial population is more precise than a random one, which helps early convergence.
Abstract: Software reuse has become very popular in software development because of its immense advantages, which include improved software product quality and decreased product cost and schedule. Reusable components stored in a repository are useful for developing early prototypes with better quality. One of the most fundamental problems in software reusability is locating and retrieving software components from a large repository: to reuse a software component, you first have to find it. Component retrieval should be fast and efficient. This paper addresses the issue of retrieving the most appropriate component from a component repository, where appropriateness means precision and quality. A two-step process is used for retrieving the appropriate component from the repository. The initial step is keyword-based retrieval, which finds all candidate components matching the user requirement. The keyword-based search narrows down the search space, which improves the initial population for the Genetic Algorithm-based second step. As the Genetic Algorithm is applied only to the extracted candidate components, the initial population is more precise than a random one, which helps early convergence. Thirty-two attributes of a reusable component are considered for selecting the appropriate component. A fitness function is calculated on the basis of the weight vectors and attribute vectors associated with each component to select the appropriate component.
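The fitness function is described only as a combination of weight and attribute vectors; a minimal sketch of such a weighted-sum fitness over a handful of the thirty-two attributes might look like this. Attribute names, weights, and values are invented for illustration.

```java
/** Illustrative sketch of the weighted-sum fitness described in the abstract:
 *  each candidate component has an attribute vector (normalised to [0,1]) and
 *  the query has a weight vector expressing the user's priorities. Attribute
 *  names, weights, and values are assumptions; the real scheme uses 32 attributes. */
public class ComponentFitness {

    // A few illustrative attributes out of the 32: reusability, reliability,
    // documentation quality, interface match with the user requirement.
    static double fitness(double[] weights, double[] attributes) {
        double score = 0;
        for (int i = 0; i < weights.length; i++) {
            score += weights[i] * attributes[i];
        }
        return score;
    }

    public static void main(String[] args) {
        double[] weights = {0.4, 0.3, 0.1, 0.2};

        double[] componentA = {0.9, 0.7, 0.5, 0.8};   // candidates from keyword-based retrieval
        double[] componentB = {0.6, 0.9, 0.9, 0.4};

        System.out.printf("Fitness A = %.2f%n", fitness(weights, componentA));
        System.out.printf("Fitness B = %.2f%n", fitness(weights, componentB));
        // The GA would evolve a population seeded with such candidates,
        // selecting on this fitness until it converges on the best match.
    }
}
```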

Journal Article
TL;DR: It seems that software process and process improvement are not properly understood among the software organizations of Bangladesh; the scenario described here could encourage software companies to introduce Software Process Improvement programs in their organizations.
Abstract: the process is not understood or the process is not operating at its best. With offshore outsourcing growing, countries like Bangladesh are in a position to export their local Information Technology services. However, what is the state of Information Technology in Bangladesh? There are no documented cases of Bangladeshi companies with a maturity of CMM level 3 or above. What are the challenges faced by Information Technology companies in Bangladesh? 2. Current Scenario of Bangladesh in SPI. Bangladesh is one of the largest developing countries in the world, with a population of more than 135 million. The software industry in Bangladesh has come a long way over the last few decades. In the last five to ten years, a good number of entrepreneurs and talented professionals have come forward to make the industry more dynamic and more vibrant. According to BASIS (the Bangladesh Association of Software and Information Services), more than three hundred (300) registered software companies are currently operating in Bangladesh. Of these, more than fifty (50) software and IT service companies are exporting their products and services to thirty (30) different countries in the world, including the USA, Canada, European countries, the Middle East, Japan, Australia, South Africa and some of the South East Asian countries (Mashroor 2005). However, it is reported that not a single company has achieved SEI's (Software Engineering Institute at Carnegie Mellon University, USA) SW-CMM or CMMI level 3, though some of them practise PSP (Personal Software Process) to improve the quality of their development process (Hossain 2004, p.5). CMM/CMMI level three (3) is generally considered the minimum requirement for a company to be eligible to participate in the global software industry. Therefore, it seems that software process and process improvement are not properly understood among the software organizations of Bangladesh. Recently, some companies have claimed that they are on the way to attaining CMM/CMMI level 3; one foreign company with a local office in Bangladesh even claims it has already achieved CMMI level 3, which is good news for the software industry in Bangladesh. This scenario could encourage software companies to introduce Software Process Improvement programs in their organizations.

Journal Article
TL;DR: This paper will define and describe what the Online Voting System should do to ensure a robust, accurate, secure and quality-based design and implementation.
Abstract: Manual voting systems have been deployed for many years with enormous success. If those systems are to be replaced with Electronic Voting Systems, we have to be absolutely sure that they will perform at least as efficiently as the traditional voting systems. Failures or flaws in Online Voting Systems would jeopardize democracy in the country implementing them. The main focus of requirements engineering is on defining and describing what a software system should do to satisfy the informal requirements provided by a statement of need. In this paper, we define and describe what the Online Voting System should do to ensure a robust, accurate, secure and quality-based design and implementation.

Journal Article
TL;DR: It is found, in an open source system, that a set of object-oriented metrics could predict class error probability and that the probability could be used to group classes into error and no-error categories with reasonable accuracy.
Abstract: Object-oriented software metrics have been shown to be able to predict various software quality factors. This paper investigates whether the metrics can predict class error probability and whether the predicted probability can group classes in an object-oriented design. We found, in an open source system, that a set of object-oriented metrics could predict class error probability and that the probability could be used to group classes into error and no-error categories with reasonable accuracy.
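The abstract does not say which model form was used; a minimal sketch, assuming a logistic model over a few OO metrics (e.g. CBO, WMC, RFC) with coefficients already fitted, could classify classes into error and no-error groups as follows. The chosen metrics, coefficients, and cut-off are assumptions.

```java
/** Illustrative sketch: turn OO design metrics into a predicted class error
 *  probability with a logistic model, then group classes by a 0.5 threshold.
 *  The metrics, coefficients, and cut-off are assumptions; the paper does not
 *  state its exact model form. */
public class ClassErrorPredictor {

    // Assumed, pre-fitted coefficients for intercept, CBO, WMC, RFC.
    static final double[] BETA = {-4.0, 0.15, 0.05, 0.02};

    static double errorProbability(double cbo, double wmc, double rfc) {
        double z = BETA[0] + BETA[1] * cbo + BETA[2] * wmc + BETA[3] * rfc;
        return 1.0 / (1.0 + Math.exp(-z));      // logistic link
    }

    public static void main(String[] args) {
        double[][] classes = {
                {3, 10, 20},    // low-coupling, simple class
                {18, 45, 90},   // heavily coupled, complex class
        };
        for (double[] c : classes) {
            double p = errorProbability(c[0], c[1], c[2]);
            System.out.printf("CBO=%.0f WMC=%.0f RFC=%.0f -> p(error)=%.2f -> %s%n",
                    c[0], c[1], c[2], p, p >= 0.5 ? "error-prone" : "no-error");
        }
    }
}
```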

Journal Article
TL;DR: A reference model for IS/ICT management responsibilities is presented; guidelines on how to define and manage architectural principles are proposed, and the architectural principles and the reference model are combined in a methodology for assessing the enterprise architecture.
Abstract: Most large enterprises are facing numerous challenges concerning their information systems (IS) and information and communication technology (ICT). Today, many enterprises employ a considerable number of applications that often have redundant functionality. There is also a large diversification in the ICT products and technologies employed. Further, integration costs are a major issue in almost all acquisition projects, and many enterprises experience a lack of data quality and information security. The list of IS/ICT management challenges can be made much longer. At most enterprises, IS/ICT decisions are made by autonomous business units. In order to change the situation described above and build a more cost-effective IS/ICT environment, all business units need to make consistent IS/ICT decisions. Distributed and consistent decisions can only be made if the decision maker knows which decisions to make and why he/she needs to make them. The latter can be described by the target architecture for the whole enterprise IS/ICT, the information needed to conduct the business and its relationship to the business processes and business organization, together with the benefits that the target architecture provides to the business. Which decisions to make is formulated into architectural principles, i.e. rules that express how your enterprise needs to design and deploy IS/ICT. The present thesis is a composite thesis including eight papers. The first four papers describe the reference model for IS/ICT management responsibilities that is one of the outcomes of the present research. Two different surveys have been performed in order to find out what the major IS/ICT management challenges are. The first survey was answered by 62 Swedish Chief Information Officers (CIOs) from large private enterprises as well as municipalities. The second survey was answered by twelve CIOs from the European electric power industry. In the fifth paper, one of the IS/ICT management responsibilities, i.e. data quality, is used to illustrate how the IS/ICT manager's responsibilities can be decomposed into measurable units. Over 70 respondents were used in order to perform an enterprise-wide measurement of the data quality at a Swedish insurance company. The last three papers are devoted to architectural principles. Architectural principles are introduced and guidelines on how to define and manage them are proposed in the sixth paper. The guidelines have been used in a review of Vattenfall's architectural principles. In the last two papers, architectural principles and the reference model are combined in a methodology for assessing the enterprise architecture. The methodology has been used in two different case studies, one at Vattenfall and one at Scania. In both case studies multiple information systems were assessed from many different viewpoints, which meant that many respondents were interviewed.

Journal Article
TL;DR: This research investigates an alternative approach to model driven development using dynamic models developed interactively with existing code that provides better support for maintaining and evolving software by keeping models more in sync with code and focusing on code integration with models rather than model driven code generation.
Abstract: Large-scale enterprise software systems are inherently complex and hard to maintain. To deal with this complexity, current mainstream software engineering practices aim at raising the level of abstraction to visual models described in OMG's UML modeling language. Current UML tools, however, produce static design diagrams for documentation which quickly become out of sync with the software. To address this issue, model-driven development approaches focus on software automation with generators that translate models into code. Unfortunately, automated code generation tends to emphasize "replacement" rather than "evolution" and, as a result, doesn't integrate well with existing legacy code. This research investigates an alternative approach to model-driven development using dynamic models developed interactively with existing code. We believe that such an approach provides better support for maintaining and evolving software by keeping models more in sync with code and focusing on code integration with models rather than model-driven code generation.

Journal Article
TL;DR: A modeling proposal specially devised for the study of CSCW (Computer-Supported Cooperative Work) systems and the subsequent development of groupware applications is presented, focusing on two specific models: a conceptual domain model formalized through a domain ontology, and a system model built using a UML-based notation.
Abstract: Groupware systems allow users to be part of a shared environment in order to carry out groupwork. Members of a group belong to organizations in which each fulfills general and specific enterprise objectives. This paper presents a modeling proposal specially devised for the study of CSCW (Computer-Supported Cooperative Work) systems and the subsequent development of groupware applications. This research work focuses on two specific models for the proposal: a conceptual domain model formalized through a domain ontology, and a system model built using a UML-based notation. The second stems from the first and each provides a Computation Independent View (CIV) with different objectives. Respectively, they allow a common vocabulary for knowledge sharing to be established, and organization functional requirements to be specified, particularly those concerning communication, coordination and collaboration.

Journal Article
Johannes Mayer
TL;DR: In the present paper, an improved version of ART by Random Partitioning is presented employing the notion of restriction; it has the same very good runtime as ART by Random Partitioning while requiring significantly fewer test cases to exhibit the first failure.
Abstract: Adaptive Random Testing (ART) is designed to detect the first failure with fewer test cases than pure Random Testing. Since the well-known ART methods, namely Distance-Based ART (D-ART) and Restriction-Based ART (RRT), have quadratic runtime, ART methods based on the idea of partitioning have been presented. ART by Random Partitioning is one of these partition-based ART algorithms. While it has only slightly more than linear asymptotic runtime, the number of test cases necessary to detect the first failure is substantially higher than that of D-ART and RRT. In the present paper, an improved version of ART by Random Partitioning is presented, employing the notion of restriction. The presented algorithm has the same very good runtime as ART by Random Partitioning while requiring significantly fewer test cases to exhibit the first failure.

Journal Article
TL;DR: This paper presents a comprehensive approach to reduce the overhead of costly string concatenation operations by implicitly transforming the Java bytecode, using a new flow-sensitive intra-procedural static analysis called reaching definitions relation analysis (RDRA).
Abstract: String concatenation via the "+" operator is one of the most convenient things to do in Java, and also one of the most expensive in terms of memory and performance. In this paper, we present a comprehensive approach to reduce the overhead of costly string concatenation operations by implicitly transforming the Java bytecode. The transformation is based on the results of a liveness analysis together with a new flow-sensitive intra-procedural static analysis called reaching definitions relation analysis (RDRA). This analysis builds a reaching definitions relation graph (RDRG) for each String variable in a method, and then uses the information encoded in the graph to check the redundancy of the StringBuffer within each string concatenation block (SCB). We have implemented the analyses and the optimizing transformations using the Soot bytecode optimization/annotation framework [1]. We have tested our optimizer on a small "String torture test" suite as well as a real application program, and we demonstrate significant speedups when executing the transformed Java class files as well as a valuable reduction in class file size.
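As background for the kind of code this transformation targets: concatenating with "+" inside a loop allocates a fresh intermediate builder and String on every iteration, which is what a rewrite into a single reused builder avoids. The example below is a source-level illustration of that effect, not the paper's bytecode-level transformation.

```java
/** Source-level illustration of why "+" concatenation in a loop is expensive
 *  and what a builder-based rewrite saves. This mirrors the intent of the
 *  bytecode transformation described in the abstract, but is not that
 *  transformation itself. */
public class ConcatDemo {

    // Each iteration compiles to a new builder, appends, and a toString(),
    // so intermediate String objects pile up.
    static String slowJoin(String[] parts) {
        String result = "";
        for (String p : parts) {
            result = result + p + ",";
        }
        return result;
    }

    // One builder reused across the whole loop: a single final String is built.
    static String fastJoin(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] parts = new String[10_000];
        java.util.Arrays.fill(parts, "x");

        long t0 = System.nanoTime();
        slowJoin(parts);
        long t1 = System.nanoTime();
        fastJoin(parts);
        long t2 = System.nanoTime();

        System.out.printf("'+' in loop: %.1f ms, StringBuilder: %.1f ms%n",
                (t1 - t0) / 1e6, (t2 - t1) / 1e6);
    }
}
```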

Journal Article
TL;DR: This work implements mechanisms that automate the process of folding a Petri net into a PrT net and finding invariants; it can compute invariants for systems that use either Petri nets or the extended PrT nets.
Abstract: Petri nets are used to study many types of networked systems. [1] have designed a software tool for the analysis and simulation of Petri nets. We extend this tool to handle a form of Petri net known as Predicate Transition (PrT) nets. We implement mechanisms that automate the process of folding a Petri net into a PrT net and finding invariants. We can compute invariants for systems that use either Petri nets or the extended PrT nets. Invariants are required to prove certain properties of the system being modeled, such as liveness and safety. Finally, for the Petri net, we use the invariants to prove these two properties.
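To make the invariant notion concrete: a place invariant (P-invariant) is a weight vector x over places with x^T * C = 0, where C is the incidence matrix; such invariants underpin the kinds of safety and liveness arguments the abstract mentions. The sketch below only checks a candidate invariant for a toy two-place, two-transition net; both the net and the vector are invented, and the tool described in the abstract computes such invariants automatically.

```java
/** Illustrative sketch: verify a candidate place invariant x for a toy Petri
 *  net by checking that x^T * C = 0, where C is the incidence matrix
 *  (rows = places, columns = transitions). Net and vector are invented. */
public class PlaceInvariantCheck {

    public static void main(String[] args) {
        // Toy net: two places p0, p1 and two transitions t0, t1.
        // t0 moves a token p0 -> p1, t1 moves it back p1 -> p0.
        int[][] incidence = {
                {-1,  1},   // row for p0
                { 1, -1},   // row for p1
        };
        int[] candidate = {1, 1};   // claim: total token count p0 + p1 is conserved

        boolean isInvariant = true;
        for (int t = 0; t < incidence[0].length; t++) {
            int sum = 0;
            for (int p = 0; p < incidence.length; p++) {
                sum += candidate[p] * incidence[p][t];   // (x^T * C) entry for transition t
            }
            if (sum != 0) {
                isInvariant = false;
            }
        }
        System.out.println("x = (1,1) is a place invariant: " + isInvariant);
    }
}
```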