
Showing papers in "IEEE Transactions on Software Engineering in 2004"


Journal ArticleDOI
TL;DR: This paper presents a middleware platform which addresses the issue of selecting Web services for the purpose of their composition in a way that maximizes user satisfaction expressed as utility functions over QoS attributes, while satisfying the constraints set by the user and by the structure of the composite service.
Abstract: The paradigmatic shift from a Web of manual interactions to a Web of programmatic interactions driven by Web services is creating unprecedented opportunities for the formation of online business-to-business (B2B) collaborations. In particular, the creation of value-added services by composition of existing ones is gaining a significant momentum. Since many available Web services provide overlapping or identical functionality, albeit with different quality of service (QoS), a choice needs to be made to determine which services are to participate in a given composite service. This paper presents a middleware platform which addresses the issue of selecting Web services for the purpose of their composition in a way that maximizes user satisfaction expressed as utility functions over QoS attributes, while satisfying the constraints set by the user and by the structure of the composite service. Two selection approaches are described and compared: one based on local (task-level) selection of services and the other based on global allocation of tasks to services using integer programming.

2,872 citations
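As a rough illustration of the local (task-level) selection strategy described in the abstract above, the following Python sketch picks, for each task independently, the candidate service with the best weighted utility over normalized QoS attributes. The services, QoS attributes, and weights are invented for the example and are not from the paper; the paper's global approach additionally enforces end-to-end constraints via integer programming, which a per-task choice like this cannot guarantee.

```python
# Illustrative sketch (not the paper's middleware): local, task-level service
# selection by a weighted utility over QoS attributes. Attribute names, weights,
# and candidate services are made up for the example.

CANDIDATES = {  # task -> list of (service, QoS attributes)
    "book_flight": [
        ("FlyFast",  {"price": 40.0, "latency_ms": 900, "availability": 0.97}),
        ("SkyCheap", {"price": 25.0, "latency_ms": 2500, "availability": 0.90}),
    ],
    "book_hotel": [
        ("StayNow",  {"price": 60.0, "latency_ms": 700, "availability": 0.99}),
        ("RoomsRUs", {"price": 55.0, "latency_ms": 1800, "availability": 0.95}),
    ],
}

# Positive weight: larger is better; negative weight: smaller is better.
WEIGHTS = {"price": -0.4, "latency_ms": -0.3, "availability": 0.3}


def normalize(values):
    """Scale a list of raw attribute values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 1.0 for v in values]


def select_locally(candidates, weights):
    """Pick, for each task independently, the service with the best utility."""
    chosen = {}
    for task, services in candidates.items():
        names = [name for name, _ in services]
        # Normalize each attribute across the candidates of this task.
        norm = {
            attr: normalize([qos[attr] for _, qos in services]) for attr in weights
        }
        utilities = [
            sum(weights[attr] * norm[attr][i] for attr in weights)
            for i in range(len(services))
        ]
        chosen[task] = max(zip(names, utilities), key=lambda p: p[1])
    return chosen


if __name__ == "__main__":
    for task, (service, utility) in select_locally(CANDIDATES, WEIGHTS).items():
        print(f"{task}: {service} (utility {utility:.2f})")
```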


Journal ArticleDOI
TL;DR: This research is compared and discussed based on a number of different criteria: the refactoring activities that are supported, the specific techniques and formalisms that are used for supporting these activities, the types of software artifacts that are being refactored, the important issues that need to be taken into account when building refactoring tool support, and the effect of refactoring on the software process.
Abstract: We provide an extensive overview of existing research in the field of software refactoring. This research is compared and discussed based on a number of different criteria: the refactoring activities that are supported, the specific techniques and formalisms that are used for supporting these activities, the types of software artifacts that are being refactored, the important issues that need to be taken into account when building refactoring tool support, and the effect of refactoring on the software process. A running example is used to explain and illustrate the main concepts.

1,206 citations


Journal ArticleDOI
TL;DR: A comprehensive review of recent research in the field of model-based performance prediction at software development time is presented in order to assess the maturity of the field and point out promising research directions.
Abstract: Over the last decade, a lot of research has been directed toward integrating performance analysis into the software development process. Traditional software development methods focus on software correctness, introducing performance issues later in the development process. This approach does not take into account the fact that performance problems may require considerable changes in design, for example, at the software architecture level or, even worse, at the requirements analysis level. Several approaches have been proposed in order to address early software performance analysis. Although some of them have been successfully applied, we are still far from seeing performance analysis integrated into ordinary software development. In this paper, we present a comprehensive review of recent research in the field of model-based performance prediction at software development time in order to assess the maturity of the field and point out promising research directions.

803 citations


Journal ArticleDOI
TL;DR: The AHEAD (algebraic hierarchical equations for application design) model is presented, which shows how step-wise refinement scales to synthesize multiple programs and multiple noncode representations, and a tool set that supports AHEAD is reviewed.
Abstract: Step-wise refinement is a powerful paradigm for developing a complex program from a simple program by adding features incrementally. We present the AHEAD (algebraic hierarchical equations for application design) model that shows how step-wise refinement scales to synthesize multiple programs and multiple noncode representations. AHEAD shows that software can have an elegant, hierarchical mathematical structure that is expressible as nested sets of equations. We review a tool set that supports AHEAD. As a demonstration of its viability, we have bootstrapped AHEAD tools from equational specifications, refining Java and non-Java artifacts automatically, a task that was previously accomplished only by ad hoc means.

776 citations
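A very loose way to picture the step-wise refinement idea above is to treat features as functions that refine a "program" containing both code and non-code artifacts, composed in sequence like nested equations. The sketch below is purely conceptual: the artifact names and feature functions are invented, and this is not AHEAD's actual notation or tooling.

```python
# Conceptual sketch only: features as functions that refine a "program", where a
# program is a mapping from artifact names (code and non-code) to content.
# Names and artifacts are invented; this is not AHEAD's notation or tool set.
from functools import reduce

def base(program):
    program["Stack.java"] = ["push", "pop"]
    program["Stack.spec"] = ["LIFO ordering"]
    return program

def feature_bounded(program):
    # Refine an existing code artifact and its specification in the same step.
    program["Stack.java"] = program["Stack.java"] + ["isFull"]
    program["Stack.spec"] = program["Stack.spec"] + ["capacity limit"]
    return program

def feature_logging(program):
    program["Stack.java"] = program["Stack.java"] + ["log"]
    return program

def compose(*features):
    """Apply features in order, mirroring an equation like logging(bounded(base))."""
    return reduce(lambda prog, f: f(prog), features, {})

if __name__ == "__main__":
    print(compose(base, feature_bounded, feature_logging))
```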


Journal ArticleDOI
TL;DR: It is shown that software failures in a variety of domains were caused by combinations of relatively few conditions, which has important implications for testing.
Abstract: Exhaustive testing of computer software is intractable, but empirical studies of software failures suggest that testing can in some cases be effectively exhaustive. We show that software failures in a variety of domains were caused by combinations of relatively few conditions. These results have important implications for testing. If all faults in a system can be triggered by a combination of n or fewer parameters, then testing all n-tuples of parameters is effectively equivalent to exhaustive testing, provided that software behavior does not depend on complex event sequences and that variables have a small set of discrete values.

767 citations
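The practical payoff of the result above is combinatorial (t-way) testing: covering all pairs or triples of parameter values needs far fewer tests than covering all combinations. The sketch below is a small greedy 2-way (pairwise) generator over invented parameters; real covering-array tools are considerably more sophisticated.

```python
# Greedy pairwise (2-way) test generation: a small illustration of why covering
# all pairs of parameter values needs far fewer tests than exhaustive testing.
# Parameters and values are invented.
from itertools import combinations, product

PARAMS = {
    "os": ["linux", "windows", "macos"],
    "browser": ["firefox", "chrome"],
    "net": ["wifi", "ethernet", "offline"],
    "locale": ["en", "de"],
}

def all_pairs(params):
    """Every (parameter, value) pair combination that must be covered."""
    pairs = set()
    for a, b in combinations(params, 2):
        for va, vb in product(params[a], params[b]):
            pairs.add(tuple(sorted([(a, va), (b, vb)])))
    return pairs

def pairs_of(test):
    """The parameter-value pairs covered by one concrete test."""
    return {tuple(sorted(p)) for p in combinations(test.items(), 2)}

def greedy_pairwise(params):
    remaining = all_pairs(params)
    names = list(params)
    candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
    tests = []
    while remaining:
        # Pick the candidate test that covers the most still-uncovered pairs.
        best = max(candidates, key=lambda t: len(pairs_of(t) & remaining))
        gained = pairs_of(best) & remaining
        if not gained:
            break
        tests.append(best)
        remaining -= gained
    return tests

if __name__ == "__main__":
    suite = greedy_pairwise(PARAMS)
    exhaustive = 1
    for values in PARAMS.values():
        exhaustive *= len(values)
    print(f"{len(suite)} pairwise tests vs {exhaustive} exhaustive combinations")
```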


Journal ArticleDOI
TL;DR: An approach that applies data mining techniques to determine change patterns can be used to recommend potentially relevant source code to a developer performing a modification task; applying the approach to the Eclipse and Mozilla open source projects shows that it can reveal valuable dependencies.
Abstract: Software developers are often faced with modification tasks that involve source code spread across a code base. Some dependencies between source code, such as those between source code written in different languages, are difficult to determine using existing static and dynamic analyses. To augment existing analyses and to help developers identify relevant source code during a modification task, we have developed an approach that applies data mining techniques to determine change patterns - sets of files that were changed together frequently in the past - from the change history of the code base. Our hypothesis is that the change patterns can be used to recommend potentially relevant source code to a developer performing a modification task. We show that this approach can reveal valuable dependencies by applying the approach to the Eclipse and Mozilla open source projects and by evaluating the predictability and interestingness of the recommendations produced for actual modification tasks on these systems.

597 citations
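The co-change idea above can be pictured with a very small support/confidence computation over past change transactions (sets of files committed together): given a file currently being modified, recommend files that frequently changed with it. The transactions, file names, and thresholds below are invented, and this is not the paper's exact mining algorithm.

```python
# Sketch of the co-change idea (not the paper's exact mining algorithm): from
# past change transactions, recommend files that frequently changed together
# with the file currently being modified. Data and thresholds are invented.
from collections import Counter

TRANSACTIONS = [
    {"ui/Editor.java", "ui/Editor.xml", "core/Model.java"},
    {"ui/Editor.java", "ui/Editor.xml"},
    {"core/Model.java", "core/Store.java"},
    {"ui/Editor.java", "ui/Editor.xml", "docs/editor.md"},
    {"ui/Editor.java", "core/Model.java"},
]

def recommend(transactions, changed_file, min_support=2, min_confidence=0.5):
    """Return files that co-changed with changed_file often enough."""
    containing = [t for t in transactions if changed_file in t]
    co_counts = Counter(f for t in containing for f in t if f != changed_file)
    recs = []
    for f, support in co_counts.items():
        confidence = support / len(containing)
        if support >= min_support and confidence >= min_confidence:
            recs.append((f, support, round(confidence, 2)))
    return sorted(recs, key=lambda r: (-r[1], -r[2]))

if __name__ == "__main__":
    for rec in recommend(TRANSACTIONS, "ui/Editor.java"):
        print(rec)
```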


Journal ArticleDOI
TL;DR: This work presents a new methodology for automatic verification of C programs against finite state machine specifications using weak simulation as the notion of conformance between the program and its specification.
Abstract: We present a new methodology for automatic verification of C programs against finite state machine specifications. Our approach is compositional, naturally enabling us to decompose the verification of large software systems into subproblems of manageable complexity. The decomposition reflects the modularity in the software design. We use weak simulation as the notion of conformance between the program and its specification. Following the counterexample guided abstraction refinement (CEGAR) paradigm, our tool MAGIC first extracts a finite model from C source code using predicate abstraction and theorem proving. Subsequently, weak simulation is checked via a reduction to Boolean satisfiability. MAGIC has been interfaced with several publicly available theorem provers and SAT solvers. We report experimental results with procedures from the Linux kernel, the OpenSSL toolkit, and several industrial strength benchmarks.

413 citations


Journal ArticleDOI
TL;DR: A solution for the development of nomadic applications is presented, based on the use of three levels of abstraction, that allows designers to focus on the relevant logical aspects and avoid dealing with a plethora of low-level details.
Abstract: The increasing availability of new types of interaction platforms raises a number of issues for designers and developers. There is a need for new methods and tools to support development of nomadic applications, which can be accessed through a variety of devices. We present a solution, based on the use of three levels of abstraction, that allows designers to focus on the relevant logical aspects and avoid dealing with a plethora of low-level details. We have defined a number of transformations able to obtain user interfaces from such abstractions, taking into account the available platforms and their interaction modalities while preserving usability. The transformations are supported by an authoring tool, TERESA, which provides designers and developers with various levels of automatic support and several possibilities for tailoring such transformations to their needs.

383 citations


Journal ArticleDOI
TL;DR: The taxonomy categorizes runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results.
Abstract: A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. We present a taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.

380 citations
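A toy illustration of the three elements named in the taxonomy above: a specification (a property over observed calls), a monitoring mechanism (here, a decorator intercepting execution), and an event handler (what to do on a violation). All names and the example property are invented; real monitoring systems use dedicated specification languages and instrumentation.

```python
# Toy runtime monitor: specification (a predicate), monitoring mechanism
# (a decorator that intercepts calls), and event handler (violation report).
# All names are invented for the example.
import functools

def violation_handler(message):
    """Event handler: here we just report; a real system might recover or halt."""
    print(f"MONITOR VIOLATION: {message}")

def monitor(precondition, description):
    """Monitoring mechanism: wrap a function and check the property on each call."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not precondition(*args, **kwargs):
                violation_handler(f"{func.__name__}: {description}")
            return func(*args, **kwargs)
        return wrapper
    return decorate

# Specification: withdrawals must never exceed the current balance.
@monitor(lambda balance, amount: amount <= balance, "withdrawal exceeds balance")
def withdraw(balance, amount):
    return balance - amount

if __name__ == "__main__":
    print(withdraw(100, 30))   # ok
    print(withdraw(100, 250))  # triggers the handler, then continues
```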


Journal ArticleDOI
TL;DR: How coupling can be defined and precisely measured based on dynamic analysis of systems is described; preliminary results suggest that some dynamic coupling measures are significant indicators of change proneness and that they complement existing coupling measures based on static analysis.
Abstract: The relationships between coupling and external quality factors of object-oriented software have been studied extensively for the past few years. For example, several studies have identified clear empirical relationships between class-level coupling and class fault-proneness. A common way to define and measure coupling is through structural properties and static code analysis. However, because of polymorphism, dynamic binding, and the common presence of unused ("dead") code in commercial software, the resulting coupling measures are imprecise as they do not perfectly reflect the actual coupling taking place among classes at runtime. For example, when using static analysis to measure coupling, it is difficult and sometimes impossible to determine what actual methods can be invoked from a client class if those methods are overridden in the subclasses of the server classes. Coupling measurement has traditionally been performed using static code analysis, because most of the existing work was done on non-object-oriented code and because dynamic code analysis is more expensive and complex to perform. For modern software systems, however, this focus on static analysis can be problematic because although dynamic binding existed before the advent of object-orientation, its usage has increased significantly in the last decade. We describe how coupling can be defined and precisely measured based on dynamic analysis of systems. We refer to this type of coupling as dynamic coupling. An empirical evaluation of the proposed dynamic coupling measures is reported in which we study the relationship of these measures with the change proneness of classes. Data from maintenance releases of a large Java system are used for this purpose. Preliminary results suggest that some dynamic coupling measures are significant indicators of change proneness and that they complement existing coupling measures based on static analysis.

367 citations
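As a minimal sketch of the general idea of measuring coupling from execution rather than from source code, the snippet below traces a Python program and records which distinct caller-class/callee-class pairs actually occur at runtime. The classes are invented, and this is not the paper's actual suite of dynamic coupling measures (which distinguish, for example, import and export coupling) or its Java instrumentation.

```python
# Minimal sketch of dynamic coupling measurement by tracing execution (not the
# paper's actual measures or instrumentation): count distinct caller-class ->
# callee-class pairs observed at runtime.
import sys

coupling_pairs = set()

def tracer(frame, event, arg):
    if event != "call":
        return
    callee_self = frame.f_locals.get("self")
    caller = frame.f_back
    caller_self = caller.f_locals.get("self") if caller else None
    if callee_self is not None and caller_self is not None:
        src, dst = type(caller_self).__name__, type(callee_self).__name__
        if src != dst:
            coupling_pairs.add((src, dst))

class Logger:
    def write(self, msg):
        pass

class Order:
    def __init__(self, logger):
        self.logger = logger
    def place(self):
        self.logger.write("order placed")  # runtime dependency on Logger

if __name__ == "__main__":
    sys.setprofile(tracer)
    Order(Logger()).place()
    sys.setprofile(None)
    print(coupling_pairs)  # e.g., {('Order', 'Logger')}
```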


Journal ArticleDOI
TL;DR: It is concluded that a polylithic approach is most suitable for toolkit builders, visual design software where code is automatically generated, and application builders where there is much customization of the toolkit.
Abstract: Here, we analyze toolkit designs for building graphical applications with rich user interfaces, comparing polylithic and monolithic toolkit-based solutions. Polylithic toolkits encourage extension by composition and follow a design philosophy similar to 3D scene graphs supported by toolkits including Java3D and Open Inventor. Monolithic toolkits, on the other hand, encourage extension by inheritance, and are more akin to 2D graphical user interface toolkits such as Swing or MFC. We describe Jazz (a polylithic toolkit) and Piccolo (a monolithic toolkit), each of which we built to support interactive 2D structured graphics applications in general, and zoomable user interface applications in particular. We examine the trade-offs of each approach in terms of performance, memory requirements, and programmability. We conclude that a polylithic approach is most suitable for toolkit builders, visual design software where code is automatically generated, and application builders where there is much customization of the toolkit. Correspondingly, we find that monolithic approaches appear to be best for application builders where there is not much customization of the toolkit.

Journal ArticleDOI
TL;DR: It is concluded that, when implementing or switching to the open-source development model, practitioners should ensure that an appropriate metrics collection strategy is in place to verify the perceived benefits.
Abstract: We describe an empirical study of open-source and closed-source software projects. The motivation for this research is to quantitatively investigate common perceptions about open-source projects, and to validate these perceptions through an empirical study. We investigate the hypothesis that open-source software grows more quickly, but we do not find evidence to support this. The project growth is similar for all the projects in the analysis, indicating that other factors may limit growth. The hypothesis that creativity is more prevalent in open-source software is also examined, and evidence to support this hypothesis is found using the metric of functions added over time. The concept of open-source projects succeeding because of their simplicity is not supported by the analysis, nor is the hypothesis of open-source projects being more modular. However, the belief that defects are found and fixed more rapidly in open-source projects is supported by an analysis of the functions modified. We find support for two of the five common beliefs and conclude that, when implementing or switching to the open-source development model, practitioners should ensure that an appropriate metrics collection strategy is in place to verify the perceived benefits.

Journal ArticleDOI
TL;DR: The specification technique paves the way for the development of tools that support rigorous application of design patterns to UML design models and is illustrated by specifying observer and visitor pattern solutions.
Abstract: Informally described design patterns are useful for communicating proven solutions for recurring design problems to developers, but they cannot be used as compliance points against which solutions that claim to conform to the patterns are checked. Pattern specification languages that utilize mathematical notation provide the needed formality, but often at the expense of usability. We present a rigorous and practical technique for specifying pattern solutions expressed in the unified modeling language (UML). The specification technique paves the way for the development of tools that support rigorous application of design patterns to UML design models. The technique has been used to create specifications of solutions for several popular design patterns. We illustrate the use of the technique by specifying observer and visitor pattern solutions.

Journal ArticleDOI
TL;DR: The results support the intuitive notion that a methodical and structured approach to program investigation is the most effective.
Abstract: Prior to performing a software change task, developers must discover and understand the subset of the system relevant to the task. Since the behavior exhibited by individual developers when investigating a software system is influenced by intuition, experience, and skill, there is often significant variability in developer effectiveness. To understand the factors that contribute to effective program investigation behavior, we conducted a study of five developers performing a change task on a medium-size open source system. We isolated the factors related to effective program investigation behavior by performing a detailed qualitative analysis of the program investigation behavior of successful and unsuccessful developers. We report on these factors as a set of detailed observations, such as evidence of the phenomenon of inattention blindness by developers skimming source code. In general, our results support the intuitive notion that a methodical and structured approach to program investigation is the most effective.

Journal ArticleDOI
TL;DR: This work presents a process to elicit NFRs, analyze their interdependencies, and trace them to functional conceptual models, focusing on conceptual models expressed using UML (Unified Modeling Language).
Abstract: Nonfunctional requirements (NFRs) have been frequently neglected or forgotten in software design. They have been presented as a second or even third class type of requirement, frequently hidden inside notes. We tackle this problem by treating NFRs as first class requirements. We present a process to elicit NFRs, analyze their interdependencies, and trace them to functional conceptual models. We focus our attention on conceptual models expressed using UML (Unified Modeling Language). Extensions to UML are proposed to allow NFRs to be expressed. We show how to integrate NFRs into the class, sequence, and collaboration diagrams. We also show how use cases and scenarios can be adapted to deal with NFRs. This work was used in three case studies and their results suggest that by using our proposal we can improve the quality of the resulting conceptual models.

Journal ArticleDOI
TL;DR: An algorithm for flag removal is defined and results are presented from an empirical study which show how the algorithm improves both the performance of evolutionary test data generation and the adequacy level of the test data so-generated.
Abstract: A testability transformation is a source-to-source transformation that aims to improve the ability of a given test generation method to generate test data for the original program. We introduce testability transformation, demonstrating that it differs from traditional transformation, both theoretically and practically, while still allowing many traditional transformation rules to be applied. We illustrate the theory of testability transformation with an example application to evolutionary testing. An algorithm for flag removal is defined and results are presented from an empirical study which show how the algorithm improves both the performance of evolutionary test data generation and the adequacy level of the test data so-generated.
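The following before/after sketch illustrates the "flag problem" that motivates the flag-removal transformation above: a branch guarded by a boolean flag gives a search-based test generator no gradient, while a transformed, semantically equivalent version exposes a distance the search can minimize. The example is invented and is not the paper's actual transformation algorithm.

```python
# Invented before/after sketch of the flag problem (not the paper's algorithm).
# In the original, the branch on `flag` gives evolutionary search no guidance;
# the transformed version exposes a distance that can be minimized.

def original(x):
    flag = (x == 42)          # boolean flag: fitness landscape is flat
    if flag:
        return "target branch"
    return "other branch"

def transformed(x):
    distance = abs(x - 42)    # transformation substitutes a distance for the flag
    if distance == 0:         # semantically equivalent branch condition
        return "target branch"
    return "other branch"

def branch_distance(x):
    """Fitness used by an evolutionary test generator on the transformed code."""
    return abs(x - 42)

if __name__ == "__main__":
    # A trivial "search": the distance tells us which candidate is closest.
    candidates = [7, 40, 41, 42]
    best = min(candidates, key=branch_distance)
    assert original(best) == transformed(best) == "target branch"
    print("covered target branch with x =", best)
```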

Journal ArticleDOI
TL;DR: The problem of configuration verification for the extended FSM (EFSM) model is investigated and it is demonstrated that this problem could be reduced to the EFSM traversal problem, so that the existing methods and tools developed in the context of model checking become applicable.
Abstract: We investigate the problem of configuration verification for the extended FSM (EFSM) model. This is an extension of the FSM state identification problem. Specifically, given a configuration ("state vector") and an arbitrary set of configurations, determine an input sequence such that the EFSM in the given configuration produces an output sequence different from that of the configurations in the given set or at least in a maximal proper subset. Such a sequence can be used in a test case to confirm the destination configuration of a particular EFSM transition. We demonstrate that this problem could be reduced to the EFSM traversal problem, so that the existing methods and tools developed in the context of model checking become applicable. We introduce notions of EFSM projections and products and, based on these notions, we develop a theoretical framework for determining configuration-confirming sequences. The proposed approach is illustrated on a realistic example.

Journal ArticleDOI
TL;DR: A tool that supports verification of workflow models specified in UML activity diagrams is described that translates an activity diagram into an input format for a model checker according to a mathematical semantics.
Abstract: We describe a tool that supports verification of workflow models specified in UML activity diagrams. The tool translates an activity diagram into an input format for a model checker according to a mathematical semantics. With the model checker, arbitrary propositional requirements can be checked against the input model. If a requirement fails to hold, an error trace is returned by the model checker, which our tool presents by highlighting a corresponding path in the activity diagram. We summarize our formal semantics, discuss the techniques used to reduce an infinite state space to a finite one, and motivate the need for strong fairness constraints to obtain realistic results. We define requirement-preserving rules for state space reduction. Finally, we illustrate the whole approach with a few example verifications.

Journal ArticleDOI
TL;DR: The results show that the most skilled developers, in particular, the senior consultants, require less time to maintain software with a delegated control style than with a centralized control style, however, more novice developers have serious problems understanding a delegatedControl style, and perform far better with a central control style.
Abstract: A fundamental question in object-oriented design is how to design maintainable software. According to expert opinion, a delegated control style, typically a result of responsibility-driven design, represents object-oriented design at its best, whereas a centralized control style is reminiscent of a procedural solution, or a "bad" object-oriented design. We present a controlled experiment that investigates these claims empirically. A total of 99 junior, intermediate, and senior professional consultants from several international consultancy companies were hired for one day to participate in the experiment. To compare differences between (categories of) professionals and students, 59 students also participated. The subjects used professional Java tools to perform several change tasks on two alternative Java designs that had a centralized and delegated control style, respectively. The results show that the most skilled developers, in particular, the senior consultants, require less time to maintain software with a delegated control style than with a centralized control style. However, more novice developers, in particular, the undergraduate students and junior consultants, have serious problems understanding a delegated control style, and perform far better with a centralized control style. Thus, the maintainability of object-oriented software depends, to a large extent, on the skill of the developers who are going to maintain it. These results may have serious implications for object-oriented development in an industrial context: having senior consultants design object-oriented systems may eventually pose difficulties unless they make an effort to keep the designs simple, as the cognitive complexity of "expert" designs might be unmanageable for less skilled maintainers.
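A tiny, invented illustration of the two control styles compared in the experiment above (the study itself used Java designs): in the centralized style one "controller" pulls data out of the other objects and does the computation itself, while in the delegated style each object is responsible for its own part.

```python
# Invented illustration of centralized vs delegated control style (the study
# used Java designs; this is only a sketch of the design idea).

class Item:
    def __init__(self, price, quantity):
        self.price = price
        self.quantity = quantity

    # Delegated style: the item knows how to price itself.
    def total(self):
        return self.price * self.quantity


def centralized_total(items):
    # Centralized style: the controller reaches into each object's data.
    return sum(item.price * item.quantity for item in items)


def delegated_total(items):
    # Delegated style: the controller only coordinates; objects do the work.
    return sum(item.total() for item in items)


if __name__ == "__main__":
    cart = [Item(10.0, 2), Item(4.5, 4)]
    assert centralized_total(cart) == delegated_total(cart) == 38.0
    print("both styles compute", delegated_total(cart))
```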

Journal ArticleDOI
TL;DR: It is shown that in most cases state-based techniques are not likely to be sufficient by themselves to catch most of the faults present in the code, and though useful, they need to be complemented with black-box, functional testing.
Abstract: This work describes an empirical investigation of the cost effectiveness of well-known state-based testing techniques for classes or clusters of classes that exhibit a state-dependent behavior. This is practically relevant as many object-oriented methodologies recommend modeling such components with statecharts which can then be used as a basis for testing. Our results, based on a series of three experiments, show that in most cases state-based techniques are not likely to be sufficient by themselves to catch most of the faults present in the code. Though useful, they need to be complemented with black-box, functional testing. We focus here on a particular technique, Category Partition, as this is the most commonly used and referenced black-box, functional testing technique. Two different oracle strategies have been applied for checking the success of test cases. One is a very precise oracle checking the concrete state of objects whereas the other one is based on the notion of state invariant (abstract states). Results show that there is a significant difference between them, both in terms of fault detection and cost. This is therefore an important choice to make that should be driven by the characteristics of the component to be tested, such as its criticality, complexity, and test budget.
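The two oracle strategies mentioned above can be pictured with a toy stateful class: a precise oracle compares the full concrete state of the object, while an abstract oracle only checks the state invariant (the abstract state). The class and checks below are invented for illustration.

```python
# Toy illustration of the two oracle strategies (class and test are invented):
# a precise oracle checks the concrete state; an abstract oracle checks only
# the state invariant (abstract state), which is cheaper but weaker.

class BoundedStack:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def push(self, x):
        if len(self.items) < self.capacity:
            self.items.append(x)

    def abstract_state(self):
        if not self.items:
            return "EMPTY"
        return "FULL" if len(self.items) == self.capacity else "PARTIAL"


def precise_oracle(stack, expected_items):
    """Checks the concrete state exactly: stronger, but costlier to specify."""
    return stack.items == expected_items


def invariant_oracle(stack, expected_abstract):
    """Checks only the abstract state: cheaper, but may miss some faults."""
    return stack.abstract_state() == expected_abstract


if __name__ == "__main__":
    s = BoundedStack(capacity=2)
    s.push("a")
    s.push("b")
    print(precise_oracle(s, ["a", "b"]))   # True
    print(invariant_oracle(s, "FULL"))     # True
    # A fault that pushed the wrong value would fool the invariant oracle
    # ("FULL" either way) but not the precise oracle.
```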

Journal ArticleDOI
TL;DR: A UML 1.5 profile endowed with a formal semantics given in terms of RT-LOTOS is presented; the profile relies on UML's extensibility mechanisms to enhance class and activity diagrams, and its application in the context of space-based embedded software is discussed.
Abstract: We present a UML 1.5 profile named TURTLE (Timed UML and RT-LOTOS Environment) endowed with a formal semantics given in terms of RT-LOTOS. TURTLE relies on UML's extensibility mechanisms to enhance class and activity diagrams. Class diagrams are extended with specialized classes named Tclasses, which communicate and synchronize through gates. Also, associations between Tclasses are attributed by a composition operator (Parallel, Synchro, Invocation, Sequence, or Preemption) which provides them with a formal semantics. TURTLE extends UML activity diagrams with synchronization actions and temporal operators (deterministic delay, nondeterministic delay, time-limited offer, and time-capture). The real-time dimension of TURTLE has been further improved by the addition of two composition operators, periodic and suspend, as well as suspendable delay, latency, and time-limited offer operators at the activity diagram level. Core characteristics of TURTLE are supported by TTool - the TURTLE toolkit - which includes a diagram editor, an RT-LOTOS code generator, and a result analyzer. The toolkit reuses RTL, an RT-LOTOS validation tool offering debug-oriented simulation and exhaustive analysis. TTool hides RT-LOTOS from the end user and allows him or her to directly check TURTLE models for logical errors and timing inconsistencies. Besides the foundations of the TURTLE profile, this paper also discusses its application in the context of space-based embedded software.

Journal ArticleDOI
TL;DR: It is recommended that software companies combine estimation error information from in-depth interviews with stakeholders in all relevant roles, estimation experience reports, and results from statistical analyses of project characteristics.
Abstract: This study aims to improve analyses of why errors occur in software effort estimation. Within one software development company, we collected information about estimation errors through: 1) interviews with employees in different roles who are responsible for estimation, 2) estimation experience reports from 68 completed projects, and 3) statistical analysis of relations between characteristics of the 68 completed projects and estimation error. We found that the role of the respondents, the data collection approach, and the type of analysis had an important impact on the reasons given for estimation error. We found, for example, a strong tendency to perceive factors outside the respondents' own control as important reasons for inaccurate estimates. Reasons given for accurate estimates, on the other hand, typically cited factors that were within the respondents' own control and were determined by the estimators' skill or experience. This bias in types of reason means that the collection only of project managers' viewpoints will not yield balanced models of reasons for estimation error. Unfortunately, previous studies on reasons for estimation error have tended to collect information from project managers only. We recommend that software companies combine estimation error information from in-depth interviews with stakeholders in all relevant roles, estimation experience reports, and results from statistical analyses of project characteristics.

Journal ArticleDOI
TL;DR: This work proposes using the expression AdjustedSize/Effort to measure productivity, and discusses the assumptions underlying this productivity measurement method and presents an example of its use for Web application projects.
Abstract: Productivity measures based on a simple ratio of product size to project effort assume that size can be determined as a single measure. If there are many possible size measures in a data set and no obvious model for aggregating the measures into a single measure, we propose using the expression AdjustedSize/Effort to measure productivity. AdjustedSize is defined as the most appropriate regression-based effort estimation model, where all the size measures selected for inclusion in the estimation model have a regression parameter significantly different from zero (p<0.05). This productivity measurement method ensures that each project has an expected productivity value of one. Values between zero and one indicate lower than expected productivity, values greater than one indicate higher than expected productivity. We discuss the assumptions underlying this productivity measurement method and present an example of its use for Web application projects. We also explain the relationship between effort prediction models and productivity models.
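A small numeric sketch of the productivity measure described above, with invented data: fit a regression-based effort model on several size measures, then take predicted effort divided by actual effort, so a project exactly at the model's expectation scores 1.0. The paper additionally requires each size measure's regression coefficient to be statistically significant (p < 0.05); that selection step is omitted here.

```python
# Numeric sketch of the AdjustedSize/Effort idea with invented data. The
# significance-based selection of size measures described in the abstract is
# omitted; this only shows the ratio of model-predicted to actual effort.
import numpy as np

# Size measures per project (e.g., web pages, images, scripts) and actual effort.
sizes = np.array([
    [20, 35, 5],
    [40, 10, 12],
    [15, 50, 3],
    [60, 20, 20],
    [30, 30, 8],
], dtype=float)
effort = np.array([120.0, 180.0, 100.0, 300.0, 160.0])  # person-hours

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(len(sizes)), sizes])
coeffs, *_ = np.linalg.lstsq(X, effort, rcond=None)
predicted = X @ coeffs

productivity = predicted / effort
for i, p in enumerate(productivity):
    label = "above" if p > 1 else "below or at"
    print(f"project {i}: productivity {p:.2f} ({label} expectation)")
```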

Journal ArticleDOI
TL;DR: This research exploits the specification of SA dynamics to identify useful schemes of interactions between system components and to select test classes corresponding to relevant architectural behaviors, using labeled transition systems called ALTSs.
Abstract: Our research deals with the use of software architecture (SA) as a reference model for testing the conformance of an implemented system with respect to its architectural specification. We exploit the specification of SA dynamics to identify useful schemes of interactions between system components and to select test classes corresponding to relevant architectural behaviors. The SA dynamics is modeled by labeled transition systems (LTSs). The approach consists of deriving suitable LTS abstractions called ALTSs. ALTSs offer specific views of SA dynamics by concentrating on relevant features and abstracting away from uninteresting ones. Intuitively, deriving an adequate set of test classes entails deriving a set of paths that appropriately cover the ALTS. Next, a relation between these abstract SA tests and more concrete, executable tests needs to be established so that the architectural tests derived can be refined into code-level tests. We use the TRMCS case study to illustrate our hands-on experience. We discuss the insights gained and highlight some issues, problems, and solutions of general interest in architecture-based testing.

Journal ArticleDOI
TL;DR: Based on a qualitative analysis of the code and the nature of the patterns, it is concluded that the Observer and Singleton patterns are correlated with larger code structures and, so, can serve as indicators of code that requires special attention.
Abstract: Software "design patterns" seek to package proven solutions to design problems in a form that makes it possible to find, adapt, and reuse them. A common claim is that a design based on properly applied patterns will have fewer defects than more ad hoc solutions. This case study analyzes the weekly evolution and maintenance of a large commercial product (C++, 500,000 LOC) over three years, comparing defect rates for classes that participated in selected design patterns to the code at large. We found that there are significant differences in defect rates among the patterns, ranging from 63 percent to 154 percent of the average rate. We developed a new set of tools able to extract design pattern information at a rate of 3/spl times/10/sup 6/ lines of code per hour, with relatively high precision. Based on a qualitative analysis of the code and the nature of the patterns, we conclude that the Observer and Singleton patterns are correlated with larger code structures and, so, can serve as indicators of code that requires special attention. Conversely, code designed with the Factory pattern is more compact and possibly less closely coupled and, consequently, has lower defect numbers. The Template Method pattern was used in both simple and complex situations, leading to no clear tendency.

Journal ArticleDOI
TL;DR: A modeling notation is introduced which extends time Petri nets with an additional mechanism of resource assignment, making the progress of timed transitions dependent on the availability of a set of preemptable resources.
Abstract: A modeling notation is introduced which extends time Petri nets with an additional mechanism of resource assignment, making the progress of timed transitions dependent on the availability of a set of preemptable resources. The resulting notation, which we call preemptive time Petri nets, permits natural description of complex real-time systems running under preemptive scheduling, with periodic, sporadic, and one-shot processes, with nondeterministic execution times, with semaphore synchronizations and precedence relations deriving from internal task sequentialization and from interprocess communication, running on multiple processors. A state space analysis technique is presented which supports the validation of preemptive time Petri net models, combining tight schedulability analysis with exhaustive verification of the correctness of logical sequencing. The analysis technique partitions the state space into equivalence classes in which timing constraints are represented in the form of difference bound matrices. This permits it to maintain a polynomial complexity in the representation and derivation of state classes, but it does not tightly encompass the constraints deriving from preemptive behavior, thus producing an enlarged representation of the state space. False behaviors deriving from the approximation can be cleaned up through an algorithm which provides a necessary and sufficient condition for the feasibility of a behavior along with a tight estimation of its timing profile.

Journal ArticleDOI
TL;DR: XACT is introduced, a high-level approach for Java using XML templates as a first-class data type with operations for manipulating XML values based on XPath, which permits static type checking using DTD schemas as types.
Abstract: XML documents generated dynamically by programs are typically represented as text strings or DOM trees. This is a low-level approach for several reasons: 1) traversing and modifying such structures can be tedious and error-prone, 2) although schema languages, e.g., DTD, allow classes of XML documents to be defined, there are generally no automatic mechanisms for statically checking that a program transforms from one class to another as intended. We introduce XACT, a high-level approach for Java using XML templates as a first-class data type with operations for manipulating XML values based on XPath. In addition to an efficient runtime representation, the data type permits static type checking using DTD schemas as types. By specifying schemas for the input and output of a program, our analysis algorithm will statically verify that valid input data is always transformed into valid output data and that the operations are used consistently.
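To convey the general idea of manipulating dynamically generated XML through path expressions rather than string splicing, the sketch below uses Python's standard library (which supports a limited XPath subset). This is not XACT's Java API, and the static, DTD-based type checking described above is not modeled here; the template content is invented.

```python
# General idea only: manipulating XML via path expressions instead of string
# concatenation, using Python's standard library (limited XPath subset).
# This is not XACT's API and does not model its static type checking.
import xml.etree.ElementTree as ET

template = """
<catalog>
  <book id="b1"><title>Old Title</title></book>
  <book id="b2"><title>Another</title></book>
</catalog>
"""

root = ET.fromstring(template)

# Select with a (limited) XPath expression and update in place.
for title in root.findall(".//book[@id='b1']/title"):
    title.text = "New Title"

# Insert a new element rather than splicing strings.
new_book = ET.SubElement(root, "book", {"id": "b3"})
ET.SubElement(new_book, "title").text = "Third Book"

print(ET.tostring(root, encoding="unicode"))
```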

Journal ArticleDOI
TL;DR: Using information about Web accesses, various measurements are derived that can characterize Web site workload at different levels of granularity and from different perspectives; these measurements are used to evaluate the operational reliability for source contents at a given Web site and the potential for reliability improvement.
Abstract: We characterize usage and problems for Web applications, evaluate their reliability, and examine the potential for reliability improvement. Based on the characteristics of Web applications and the overall Web environment, we classify Web problems and focus on the subset of source content problems. Using information about Web accesses, we derive various measurements that can characterize Web site workload at different levels of granularity and from different perspectives. These workload measurements, together with failure information extracted from recorded errors, are used to evaluate the operational reliability for source contents at a given Web site and the potential for reliability improvement. We applied this approach to the Web sites www.seas.smu.edu and www.kde.org. The results demonstrated the viability and effectiveness of our approach.
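A small sketch of the kind of measurement described above: derive workload (number of accesses) and failures (error responses) from an access log and estimate operational reliability as the fraction of successful accesses, in the spirit of a simple Nelson-style estimate. The log lines, status codes, and the choice of "status >= 400 means failure" are invented simplifications, not the paper's classification of source content problems.

```python
# Invented sketch: workload and reliability from an access log. Treating any
# 4xx/5xx response as a failure is a simplification for illustration only.
LOG = [
    ("GET", "/index.html", 200),
    ("GET", "/missing.html", 404),
    ("GET", "/index.html", 200),
    ("GET", "/app/report", 500),
    ("GET", "/docs/intro.html", 200),
    ("GET", "/index.html", 200),
]

def reliability(log):
    total = len(log)                        # workload: total accesses
    failures = sum(1 for _, _, status in log if status >= 400)
    r = 1 - failures / total                # probability an access succeeds
    mtbf = total / failures if failures else float("inf")  # accesses per failure
    return r, mtbf

if __name__ == "__main__":
    r, mtbf = reliability(LOG)
    print(f"operational reliability: {r:.2f}, mean accesses between failures: {mtbf:.1f}")
```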

Journal ArticleDOI
TL;DR: Results indicate that search patterns that rapidly switch between the two design diagrams are the most effective, which supports the cognitive theory thesis that how an individual processes information impacts processing success.
Abstract: Reviews and inspections of software artifacts throughout the development life cycle are effective techniques for identifying defects and improving software quality. While review methods for text-based artifacts (e.g., code) are well understood, very little guidance is available for performing reviews of software diagrams, which are rapidly becoming the dominant form of software specification and design. Drawing upon human cognitive theory, we study how 12 experienced software developers perform individual reviews on a software design containing two types of diagrams: entity-relationship diagrams and data flow diagrams. Verbal protocol methods are employed to describe and analyze defect search patterns among the software artifacts, both text and diagrams, within the design. Results indicate that search patterns that rapidly switch between the two design diagrams are the most effective. These findings support the cognitive theory thesis that how an individual processes information impacts processing success. We conclude with specific recommendations for improving the practice of reviewing software diagrams.

Journal ArticleDOI
TL;DR: There is strong evidence that developers tend to use the extraneous functionality in the artifacts they are reusing and some evidence of anchoring to errors and omissions in reused artifacts.
Abstract: The extensive literature on reuse in software engineering has focused on technical and organizational factors, largely ignoring cognitive characteristics of individual developers. Despite anecdotal evidence that cognitive heuristics play a role in successful artifact reuse, few empirical studies have explored this relationship. This paper proposes how a cognitive heuristic, called anchoring, and the resulting adjustment bias can be adapted and extended to predict issues that might arise when developers reuse code and/or designs. The research proposes that anchoring and adjustment can be manifested in three ways: propagation of errors in reuse artifacts, failure to include requested functionality absent from reuse artifacts, and inclusion of unrequested functionality present in reuse artifacts. Results from two empirical studies are presented. The first study examines reuse of object classes in a programming task, using a combination of practicing programmers and students. The second study uses a database design task with student participants. Results from both studies indicate that anchoring occurs. Specifically, there is strong evidence that developers tend to use the extraneous functionality in the artifacts they are reusing and some evidence of anchoring to errors and omissions in reused artifacts. Implications of these findings for both practice and future research are explored.