
Showing papers in "Software - Practice and Experience in 2011"


Journal ArticleDOI
TL;DR: The result of this case study proves that the federated Cloud computing model significantly improves the application QoS requirements under fluctuating resource and service demand patterns.
Abstract: Cloud computing is a recent advancement wherein IT infrastructure and applications are provided as ‘services’ to end-users under a usage-based payment model. It can leverage virtualized services even on the fly based on requirements (workload patterns and QoS) varying with time. The application services hosted under Cloud computing model have complex provisioning, composition, configuration, and deployment requirements. Evaluating the performance of Cloud provisioning policies, application workload models, and resources performance models in a repeatable manner under varying system and user configurations and requirements is difficult to achieve. To overcome this challenge, we propose CloudSim: an extensible simulation toolkit that enables modeling and simulation of Cloud computing systems and application provisioning environments. The CloudSim toolkit supports both system and behavior modeling of Cloud system components such as data centers, virtual machines (VMs) and resource provisioning policies. It implements generic application provisioning techniques that can be extended with ease and limited effort. Currently, it supports modeling and simulation of Cloud computing environments consisting of both single and inter-networked clouds (federation of clouds). Moreover, it exposes custom interfaces for implementing policies and provisioning techniques for allocation of VMs under inter-networked Cloud computing scenarios. Several researchers from organizations, such as HP Labs in U.S.A., are using CloudSim in their investigation on Cloud resource provisioning and energy-efficient management of data center resources. The usefulness of CloudSim is demonstrated by a case study involving dynamic provisioning of application services in the hybrid federated clouds environment. The result of this case study proves that the federated Cloud computing model significantly improves the application QoS requirements under fluctuating resource and service demand patterns. Copyright © 2010 John Wiley & Sons, Ltd.
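To make the idea of a pluggable provisioning policy concrete, here is a deliberately simplified, hypothetical Python sketch of first-fit VM placement in a simulated data center. It is not CloudSim's Java API; all class and function names are invented for illustration.

```python
# Minimal, hypothetical sketch of a pluggable VM-allocation policy of the kind
# a simulator like CloudSim supports; this is NOT the CloudSim Java API.
from dataclasses import dataclass, field

@dataclass
class Host:
    host_id: int
    mips: int            # remaining CPU capacity
    ram: int             # remaining RAM (MB)
    vms: list = field(default_factory=list)

@dataclass
class Vm:
    vm_id: int
    mips: int
    ram: int

def first_fit_allocate(hosts, vm):
    """Place the VM on the first host with enough spare capacity."""
    for host in hosts:
        if host.mips >= vm.mips and host.ram >= vm.ram:
            host.mips -= vm.mips
            host.ram -= vm.ram
            host.vms.append(vm)
            return host
    return None  # request rejected (or forwarded to a federated cloud)

if __name__ == "__main__":
    datacenter = [Host(0, 10000, 16384), Host(1, 10000, 16384)]
    requests = [Vm(i, 2500, 4096) for i in range(6)]
    placed = [first_fit_allocate(datacenter, vm) for vm in requests]
    rejected = sum(1 for p in placed if p is None)
    print(f"placed={len(requests) - rejected}, rejected={rejected}")
```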

4,570 citations


Journal ArticleDOI
TL;DR: While some feature ranking techniques performed similarly, the automatic hybrid search algorithm performed the best among the feature subset selection methods, and performances of the defect prediction models either improved or remained unchanged when over 85% of the metrics were eliminated.
Abstract: The selection of software metrics for building software quality prediction models is a search-based software engineering problem. An exhaustive search for such metrics is usually not feasible due to limited project resources, especially if the number of available metrics is large. Defect prediction models are necessary in aiding project managers for better utilizing valuable project resources for software quality improvement. The efficacy and usefulness of a fault-proneness prediction model is only as good as the quality of the software measurement data. This study focuses on the problem of attribute selection in the context of software quality estimation. A comparative investigation is presented for evaluating our proposed hybrid attribute selection approach, in which feature ranking is first used to reduce the search space, followed by a feature subset selection. A total of seven different feature ranking techniques are evaluated, while four different feature subset selection approaches are considered. The models are trained using five commonly used classification algorithms. The case study is based on software metrics and defect data collected from multiple releases of a large real-world software system. The results demonstrate that while some feature ranking techniques performed similarly, the automatic hybrid search algorithm performed the best among the feature subset selection methods. Moreover, performances of the defect prediction models either improved or remained unchanged when over 85% of the software metrics were eliminated. Copyright © 2011 John Wiley & Sons, Ltd.
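The two-stage "rank, then search the reduced space" idea can be sketched as follows. This is a generic illustration under simplifying assumptions (a toy mean-difference filter and a nearest-centroid wrapper), not the paper's specific hybrid search algorithm.

```python
# Illustrative two-stage hybrid attribute selection: feature ranking shrinks the
# search space, then greedy forward subset selection runs on the survivors.
import random

def rank_features(X, y):
    """Score each feature by the absolute difference of class means (toy filter)."""
    scores = []
    for j in range(len(X[0])):
        pos = [row[j] for row, label in zip(X, y) if label == 1]
        neg = [row[j] for row, label in zip(X, y) if label == 0]
        mean = lambda v: sum(v) / len(v) if v else 0.0
        scores.append((abs(mean(pos) - mean(neg)), j))
    return [j for _, j in sorted(scores, reverse=True)]

def wrapper_score(subset, X, y):
    """Toy wrapper: leave-one-out accuracy of a nearest-centroid classifier
    restricted to the selected features."""
    correct = 0
    for i in range(len(X)):
        train = [(row, lab) for k, (row, lab) in enumerate(zip(X, y)) if k != i]
        centroids = {}
        for label in (0, 1):
            rows = [r for r, lab in train if lab == label]
            centroids[label] = [sum(r[j] for r in rows) / len(rows) for j in subset]
        dist = lambda row, c: sum((row[j] - cj) ** 2 for j, cj in zip(subset, c))
        pred = min(centroids, key=lambda lab: dist(X[i], centroids[lab]))
        correct += (pred == y[i])
    return correct / len(X)

def hybrid_select(X, y, keep_top=5, max_size=3):
    ranked = rank_features(X, y)[:keep_top]        # stage 1: filter ranking
    selected, best = [], 0.0
    while len(selected) < max_size:                # stage 2: greedy forward search
        candidates = [f for f in ranked if f not in selected]
        score, feat = max((wrapper_score(selected + [f], X, y), f) for f in candidates)
        if score <= best:
            break
        best, selected = score, selected + [feat]
    return selected, best

if __name__ == "__main__":
    random.seed(0)
    # 20 instances, 12 metrics; only metrics 0 and 1 carry signal.
    y = [i % 2 for i in range(20)]
    X = [[(label * 2 + random.random()) if j < 2 else random.random()
          for j in range(12)] for label in y]
    print(hybrid_select(X, y))
```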

252 citations


Journal ArticleDOI
TL;DR: To determine how widely the notations of the UML, and their usefulness, have been studied empirically, and to identify which aspects of it have been studied in most detail, a mapping study of the literature was undertaken.
Abstract: The Unified Modeling Language (UML) was created on the basis of expert opinion and has now become accepted as the ‘standard’ object-oriented modelling notation. Our objectives were to determine how widely the notations of the UML, and their usefulness, have been studied empirically, and to identify which aspects of it have been studied in most detail. We undertook a mapping study of the literature to identify relevant empirical studies and to classify them in terms of the aspects of the UML that they studied. We then conducted a systematic literature review, covering empirical studies published up to the end of 2008, based on the main categories identified. We identified 49 relevant publications, and report the aggregated results for those categories for which we had enough papers— metrics, comprehension, model quality, methods and tools and adoption. Despite indications that a number of problems exist with UML models, researchers tend to use the UML as a ‘given’ and seem reluctant to ask questions that might help to make it more effective. Copyright © 2010 John Wiley & Sons, Ltd.

133 citations


Journal ArticleDOI
TL;DR: The main contribution is the definition of novel measures connected to the diagrams to achieve the following goals: increase throughput and reduce lead‐time to achieve high responsiveness to customers' needs and to provide a tracking system that shows the progress/status of software product development.
Abstract: Responsiveness to customer needs is an important goal in agile and lean software development. One major aspect is to have a continuous and smooth flow that quickly delivers value to the customer. In this paper we apply cumulative flow diagrams to visualize the flow of lean software development. The main contribution is the definition of novel measures connected to the diagrams to achieve the following goals: (1) to increase throughput and reduce lead-time to achieve high responsiveness to customers' needs and (2) to provide a tracking system that shows the progress/status of software product development. An evaluation of the measures in an industrial case study showed that practitioners found them useful and identified improvements based on the measurements, which were in line with lean and agile principles. Furthermore, the practitioners found the measures useful in seeing the progress of development for complex products where many tasks are executed in parallel. The measures are now an integral part of the improvement work at the studied company. Copyright © 2010 John Wiley & Sons, Ltd.
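As a rough illustration (not the paper's exact measure definitions), lead time and throughput of the kind visualized in a cumulative flow diagram can be computed from per-task timestamps like this:

```python
# Illustrative lead-time and throughput calculation from task flow data.
# The measure definitions here are simplified stand-ins for the paper's measures.
from datetime import date

tasks = [  # (task id, entered backlog, delivered to customer)
    ("T1", date(2011, 1, 3), date(2011, 1, 17)),
    ("T2", date(2011, 1, 5), date(2011, 1, 14)),
    ("T3", date(2011, 1, 10), date(2011, 1, 31)),
]

lead_times = [(done - start).days for _, start, done in tasks]
avg_lead_time = sum(lead_times) / len(lead_times)

period_days = (max(d for _, _, d in tasks) - min(s for _, s, _ in tasks)).days
throughput = len(tasks) / (period_days / 7)   # completed tasks per week

print(f"average lead time: {avg_lead_time:.1f} days, "
      f"throughput: {throughput:.2f} tasks/week")
```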

128 citations


Journal ArticleDOI
TL;DR: SPINE (signal processing in‐node environment), a domain‐specific framework for rapid prototyping of WBSN applications, which is lightweight and flexible enough to be easily customized to fit particular application‐specific needs, is presented.
Abstract: Wireless body sensor networks (WBSNs) enable a broad range of applications for continuous and real-time health monitoring and medical assistance. Programming WBSN applications is a complex task especially due to the limitation of resources of typical hardware platforms and to the lack of suitable software abstractions. In this paper, SPINE (signal processing in-node environment), a domain-specific framework for rapid prototyping of WBSN applications, which is lightweight and flexible enough to be easily customized to fit particular application-specific needs, is presented. The architecture of SPINE has two main components: one implemented on the node coordinating the WBSN and one on the nodes with sensors. The former is based on a Java application, which makes it possible to configure and manage the network and implements the classification functions that are too heavy to be implemented on the sensor nodes. The latter supports sensing, computing and data transmission operations through a set of libraries, protocols and utility functions that are currently implemented for TinyOS platforms. SPINE allows evaluating different architectural choices and deciding how to distribute signal processing and classification functions over the nodes of the network. Finally, this paper describes an activity monitoring application and presents the benefits of using the SPINE framework. Copyright © 2010 John Wiley & Sons, Ltd. (This paper is a significantly extended version of R. Gravina, A. Guerrieri, G. Fortino, F. Bellifemine, R. Giannantonio, M. Sgroi, ‘Development of Body Sensor Network Applications using SPINE,’ In Proc. of IEEE International Conference on Systems, Man, and Cybernetics (SMC 2008), Singapore, Oct. 12–15, 2008.)

103 citations


Journal ArticleDOI
TL;DR: In this organizational setting, cooperation between the Agile developers and UX designers was achieved through ongoing articulation work by the developers, who were compelled to engage a culturally distinct UX design division.
Abstract: Previous discussions of how User Experience (UX) designers and Agile developers can work together have focused on bringing the disciplines together by merging their processes or adopting specific techniques. This paper reports in detail on one observational study of a mature Scrum team in a large organization, and their interactions with the UX designers working on the same project. The evidence from our study shows that Agile development and UX design practice is not explained by rationalized accounts dealing with processes or techniques. Instead, understanding practice requires examining the wider organizational setting in which the Agile developers and UX designers are embedded. Our account focuses on the situatedness of the work by making reference to values and assumptions in the organizational setting, and the consequences that those values and assumptions had for practice. In this organizational setting, cooperation between the Agile developers and UX designers was achieved through ongoing articulation work by the developers, who were compelled to engage a culturally distinct UX design division. Based on this study, insights into culture, self-organization and purposeful work highlight significant implications for practice. Copyright © 2011 John Wiley & Sons, Ltd.

78 citations


Journal ArticleDOI
TL;DR: This paper presents a systematic literature review of experiences and practices on APLE, in which the key findings uncover important challenges about how to integrate the SPLE model with an agile iterative approach to fully put APLE into practice.
Abstract: Software Product Line Engineering (SPLE) demands upfront long-term investment in (i) designing a common set of core-assets and (ii) managing variability across the products from the same family. When changes in these core-assets can be anticipated with reasonable accuracy, SPLE has been shown to yield significant improvements. However, when large/complex software product line projects have to deal with changing market conditions, alternatives to supplement SPLE are required. Agile Software Development (ASD) may be an alternative, as agile processes harness change for the customer's competitive advantage. However, when the aim is to scale Agile projects up to effectively manage reusability and variability across the products from the same family, alternatives to supplement agility are also required. As a result, a new approach called Agile Product Line Engineering (APLE) advocates integrating SPLE and ASD with the aim of addressing these gaps. APLE is an emerging approach, which implies that organizations have to face several barriers to achieve its adoption. This paper presents a systematic literature review of experiences and practices on APLE, in which the key findings uncover important challenges about how to integrate the SPLE model with an agile iterative approach to fully put APLE into practice. Copyright © 2011 John Wiley & Sons, Ltd.

75 citations


Journal ArticleDOI
TL;DR: A model‐driven software process suitable to develop a set of integrated tools around a formal method that exploits concepts and technologies of the Model‐driven Engineering (MDE) approach, such as metamodelling and automatic generation of software artifacts from models.
Abstract: This paper presents a model-driven software process suitable to develop a set of integrated tools around a formal method. This process exploits concepts and technologies of the Model-driven Engineering (MDE) approach, such as metamodelling and automatic generation of software artifacts from models. We describe the requirements to fulfill and the development steps of this model-driven process. As a proof-of-concept, we apply it to the Finite State Machines and we report our experience in engineering a metamodel-based language and a toolset for the Abstract State Machine formal method. Copyright © 2011 John Wiley & Sons, Ltd. This work was partially supported by the Italian Government under the project PRIN 2007 D-ASAP (2007XKEHFA).

73 citations


Journal ArticleDOI
TL;DR: Agile methods have matured since the academic community suggested almost a decade ago that they were not suitable for safety‐critical systems; the experiences on the image‐guided surgical toolkit project are presented as a case study for renewing the discussion.
Abstract: The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore if safety-critical systems require greater emphasis on activities, such as formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested almost a decade ago that they were not suitable for safety-critical systems; we present our experiences as a case study for renewing the discussion. Copyright © 2011 John Wiley & Sons, Ltd.

62 citations


Journal ArticleDOI
TL;DR: Results are presented from the application of the proposed search‐based project planning approach to data obtained from two large‐scale commercial software maintenance projects to minimize the completion time and reduce schedule fragmentation.
Abstract: Allocating resources to a software project and assigning tasks to teams constitute crucial activities that affect project cost and completion time. Finding a solution for such a problem is NP-hard; this requires managers to be supported by proper tools for performing such an allocation. This paper shows how search-based optimization techniques can be combined with a queuing simulation model to address these problems. The obtained staff and task allocations aim to minimize the completion time and reduce schedule fragmentation. The proposed approach allows project managers to run multiple simulations, compare results and consider trade-offs between increasing the staffing level and anticipating the project completion date and between reducing the fragmentation and accepting project delays. The paper presents results from the application of the proposed search-based project planning approach to data obtained from two large-scale commercial software maintenance projects. Copyright © 2011 John Wiley & Sons, Ltd.
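A minimal sketch of the overall approach, under strong simplifying assumptions: a local search over staffing levels, where each candidate allocation is scored by a crude completion-time estimate standing in for the paper's queuing simulation model.

```python
# Illustrative hill-climbing search over team staffing levels; each candidate
# allocation is evaluated by a (very) simplified completion-time estimate.
# This is not the paper's combined search/queuing-simulation approach.
import random

WORK = [120, 80, 200, 60]          # effort (person-days) queued at each team
TOTAL_STAFF = 12

def completion_time(staff_per_team):
    """Crude stand-in for a queuing simulation: each team burns down its queue
    at a rate proportional to its staff; the project ends with the slowest team."""
    return max(w / max(s, 1) for w, s in zip(WORK, staff_per_team))

def neighbour(alloc):
    """Move one person from a random team to another."""
    a = alloc[:]
    src, dst = random.sample(range(len(a)), 2)
    if a[src] > 1:
        a[src] -= 1
        a[dst] += 1
    return a

def search(iterations=2000):
    alloc = [TOTAL_STAFF // len(WORK)] * len(WORK)
    best = completion_time(alloc)
    for _ in range(iterations):
        cand = neighbour(alloc)
        t = completion_time(cand)
        if t < best:
            alloc, best = cand, t
    return alloc, best

if __name__ == "__main__":
    random.seed(1)
    allocation, days = search()
    print(f"staffing {allocation} -> estimated completion {days:.1f} days")
```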

62 citations


Journal ArticleDOI
TL;DR: Two methods have been identified for Event‐B model decomposition: shared variable and shared event and the respective tool support in the Rodin platform is introduced.
Abstract: Two methods have been identified for Event-B model decomposition: shared variable and shared event. The purpose of this paper is to introduce the two approaches and the respective tool support in the Rodin platform. Besides alleviating the complexity for large systems and respective proofs, decomposition allows team development in parallel over the same Event-B project which is very attractive in the industrial environment. Copyright © 2011 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This paper applies a genetic algorithm to generate the best refactoring schedule within a reasonable time and compares the GA‐based approach with manual scheduling, greedy heuristic‐based, and exhaustive approaches for four open systems to show that it generates more beneficial schedules than the others.
Abstract: Refactoring is a widely accepted technique to improve the software quality by restructuring its design without changing its behavior. In general, a sequence of refactorings needs to be applied until the quality of the code is improved satisfactorily. In this case, the final design after refactoring can vary with the application order of refactorings, thereby producing different quality improvements. Therefore, it is necessary to determine a proper refactoring schedule to obtain as many benefits as possible. However, there is little research on the problem of generating appropriate schedules to maximize quality improvement. In this paper, we propose an approach to automatically determine an appropriate schedule to maximize quality improvement through refactoring. We first detect code clones that are suitable for refactoring and generate the most beneficial refactoring schedule to remove them. It is straightforward to select the best from the exhaustively enumerated schedules. However, such a technique becomes NP-hard, as the number of available refactorings increases. We apply a genetic algorithm (GA) to generate the best refactoring schedule within a reasonable time to cope with this problem. We compare the GA-based approach with manual scheduling, greedy heuristic-based, and exhaustive approaches for four open systems. The results show that the proposed GA-based approach generates more beneficial schedules than the others. Copyright © 2010 John Wiley & Sons, Ltd.
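A minimal sketch of a GA over refactoring orderings: the permutation encoding, order crossover and swap mutation are standard ingredients, while the fitness function below is an invented placeholder for the paper's quality-improvement model.

```python
# Illustrative genetic algorithm over refactoring schedules (permutations).
# The fitness function is a made-up placeholder; only the GA scaffolding matters.
import random

REFACTORINGS = list(range(8))          # ids of candidate clone-removal refactorings

def fitness(schedule):
    """Toy fitness: earlier positions weigh more, and one ordering loses benefit."""
    score = sum((len(schedule) - pos) * (r + 1) for pos, r in enumerate(schedule))
    if schedule.index(2) < schedule.index(5):   # pretend 2-before-5 is less beneficial
        score -= 20
    return score

def crossover(p1, p2):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    return [g if g is not None else rest.pop(0) for g in child]

def mutate(s, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
    return s

def evolve(pop_size=30, generations=100):
    pop = [random.sample(REFACTORINGS, len(REFACTORINGS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    random.seed(0)
    best = evolve()
    print("best schedule:", best, "fitness:", fitness(best))
```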

Journal ArticleDOI
TL;DR: This work identifies individual‐level constraints and team‐ level constraints of human resource allocation based on the literature and interviews with experts in the industry, and provides a guideline supporting various factors, with respect to roles and module characteristics, to estimate the productivity of developers based on COCOMO II.
Abstract: Resource allocation in a software project is crucial for successful software development. Among various types of resources, human resource is the most important as software development is a human-intensive activity. Human resource allocation is very complex owing to the human characteristics of developers. The human characteristics affecting allocation can be grouped into individual-level characteristics and team-level characteristics. At the individual level, familiarity with tasks needs to be taken into account as it affects the performance of developers. In addition, developers have different levels of productivity, depending on their capability and experience; the productivity of developers also varies according to tasks. At the team level, characteristics such as team cohesion, communication overhead, and collaboration and management also affect human resource allocation. As these characteristics affect the efficiency of project execution, we treat them as constraints of human resource allocation in our approach. We identify individual-level constraints and team-level constraints based on the literature and interviews with experts in the industry. With these constraints, our approach optimizes the scheduling of human resource allocations, resulting in more realistic and efficient allocations. We also provide a guideline supporting various factors, with respect to roles and module characteristics, to estimate the productivity of developers based on COCOMO II. As productivity data are hard to obtain and manage, our guideline can provide a useful direction for human resource allocation in case of software projects. To validate our proposed approach, we document a case study using real project data. Copyright © 2011 John Wiley & Sons, Ltd.
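For reference, the COCOMO II post-architecture effort equation that such a productivity guideline can build on is shown below; the scale-factor and effort-multiplier values are made-up example inputs, not the paper's calibration.

```python
# COCOMO II post-architecture effort equation, used here only to illustrate how
# developer productivity estimates can be derived. The scale factors and effort
# multipliers below are example values, not a real calibration.
def cocomo2_effort(ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
    E = B + 0.01 * sum(scale_factors)
    product = 1.0
    for em in effort_multipliers:
        product *= em
    return A * (ksloc ** E) * product      # person-months

if __name__ == "__main__":
    effort = cocomo2_effort(
        ksloc=50,
        scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],   # example ratings
        effort_multipliers=[1.10, 0.87, 1.00, 1.17],    # example cost drivers
    )
    print(f"estimated effort: {effort:.1f} person-months")
    print(f"implied productivity: {50_000 / effort:.0f} SLOC per person-month")
```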

Journal ArticleDOI
TL;DR: The study shows that HiP‐HOPS can overcome the limitations of earlier work based on Reliability Block Diagrams by enabling dependability analysis and optimization of architectures that may have a network topology and exhibit multiple failure modes.
Abstract: New processes for the design of dependable systems must address both cost and dependability concerns. They should also maximize the potential for automation to address the problem of increasing technological complexity and the potentially immense design spaces that need to be explored. In this paper we show a design process that integrates system modelling, automated dependability analysis and evolutionary optimization techniques to achieve the optimization of designs with respect to dependability and cost from the early stages. Computerized support is provided for difficult aspects of fault tolerant design, such as decision making on the type and location of fault detection and fault tolerant strategies. The process is supported by HiP-HOPS, a scalable automated dependability analysis and optimization tool. The process was applied to a Pre-collision system for vehicles at an early stage of its design. The study shows that HiP-HOPS can overcome the limitations of earlier work based on Reliability Block Diagrams by enabling dependability analysis and optimization of architectures that may have a network topology and exhibit multiple failure modes. Copyright © 2011 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This paper improves the state of the art by presenting a component model for hard real‐time systems and defining the semantics of different types of component interactions and presenting an implementation of a middleware that supports this component model.
Abstract: The size and complexity of software in safety-critical systems is increasing at a rapid pace. One technology that can be used to mitigate this complexity is component-based software development. However, in spite of the apparent benefits of a component-based approach to development, little work has been done in applying these concepts to hard real-time systems. This paper improves the state of the art by making three contributions: (1) we present a component model for hard real-time systems and define the semantics of different types of component interactions; (2) we present an implementation of a middleware that supports this component model. This middleware combines an open-source CORBA Component Model (CCM) implementation (MICO) with ARINC-653: a state-of-the-art real-time operating systems (RTOS) standard, (3) finally; we describe a modeling environment that enables design, analysis, and deployment of component assemblies. We conclude with a discussion of the lessons learned during this exercise. Our experiences point toward extending both the CCM as well as revising the ARINC-653. Copyright © 2011 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: Experimental studies demonstrate that E3‐R efficiently obtains quality deployment configurations that satisfy given SLAs and exhibit the trade‐offs among conflicting QoS objectives.
Abstract: This paper focuses on service deployment optimization in cloud computing environments. In a cloud, an application is assumed to consist of multiple services. Each service in an application can be deployed as one or more service instances. Different service instances operate at different quality of service (QoS) levels depending on the amount of computing resources assigned to them. In order to satisfy given performance requirements, i.e. service level agreements (SLAs), each application is required to optimize its deployment configuration such as the number of service instances, the amount of computing resources to assign and the locations of service instances. Since this problem is NP-hard and often faces trade-offs among conflicting QoS objectives in SLAs, existing optimization methods often fail to solve it. E3-R is a multiobjective genetic algorithm that seeks a set of Pareto-optimal deployment configurations that satisfy SLAs and exhibit the trade-offs among conflicting QoS objectives. By leveraging queueing theory, E3-R estimates the performance of an application and aids in defining SLAs in a probabilistic manner. Moreover, E3-R automatically reduces the number of QoS objectives and improves the quality of solutions further. Experimental studies demonstrate that E3-R efficiently obtains quality deployment configurations that satisfy given SLAs. Copyright © 2011 John Wiley & Sons, Ltd.
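As a hedged illustration of using queueing theory to check a deployment configuration against an SLA (E3-R's performance model and probabilistic SLAs are considerably richer), an M/M/1 approximation per service instance looks like this:

```python
# Illustrative M/M/1 estimate of service response time for a deployment
# configuration, checked against a response-time SLA. E3-R's actual performance
# model and probabilistic SLA handling are more sophisticated than this.
def mm1_response_time(arrival_rate, service_rate_per_instance, instances):
    """Split traffic evenly over identical instances and model each as M/M/1."""
    lam = arrival_rate / instances
    mu = service_rate_per_instance
    if lam >= mu:
        return float("inf")          # unstable: the queue grows without bound
    return 1.0 / (mu - lam)          # mean response time (waiting + service)

if __name__ == "__main__":
    for n in range(1, 6):
        rt = mm1_response_time(arrival_rate=40.0, service_rate_per_instance=15.0,
                               instances=n)
        verdict = "meets" if rt <= 0.5 else "violates"
        print(f"{n} instance(s): mean response time {rt:.3f}s -> {verdict} 0.5s SLA")
```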

Journal ArticleDOI
TL;DR: This study investigates the state-of-the-art in Agile SPL approaches, identifies the current gaps in the research, synthesizes the available evidence and proposes specific Agile methods and practices for integration in SPL.
Abstract: Background: Software product lines and Agile methods have been an effective solution for dealing with the growing complexity of software and handling competitive needs of software organizations. They also share common goals, such as improving productivity, reducing time-to-market, decreasing development costs and increasing customer satisfaction. There has been growing interest in whether the integration of Agile and SPL could provide further benefits and solve many of the outstanding issues surrounding software development. Objective: This study investigates the state-of-the-art in Agile SPL approaches, while identifying gaps in current research and synthesizing available evidence. It also provides a basis for a deeper understanding of the issues involved in the integration of Agile and SPL. Method: A mapping study was undertaken to analyze the relation between Agile and SPL methods. A set of four research questions was defined against which the 32 primary studies were evaluated. Results: This study provides insights into the integration of Agile and SPL approaches; it identifies the current gaps in the research, synthesizes the available evidence and proposes specific Agile methods and practices for integration in SPL. Conclusions: In general, few studies describe the underlying Agile principles being adopted by proposed Agile SPL solutions. The most common Agile practices proposed by the studies came from the XP and Scrum methods, particularly in the pro-active SPL strategy. We identify certain Agile methods that are being overlooked by the Agile SPL community, and propose specific SPL practice areas suitable for adoption of Agile practices. Copyright © 2011 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This paper discusses the developments within a single case study, Intuit's Quickbooks product line that combined agile software development, design thinking and self‐organizing teams in a successful approach, which provided a significant improvement in terms of responsiveness and accuracy of building customer value.
Abstract: The ability to rapidly respond to customer interest and to effectively prioritize development effort has been a long-standing challenge for mass-market software intensive products. This problem is exacerbated in the context of software product lines as functionality may easily fall over software asset and organizational boundaries with consequent losses in efficiency and nimbleness. Some companies facing these problems in their product line respond with a new development process. In this paper we discuss the developments within a single case study, Intuit's QuickBooks product line, which combined agile software development, design thinking and self-organizing teams in a successful approach that provided a significant improvement in terms of responsiveness and accuracy of building customer value. Copyright © 2011 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: Assessment of the occurrence of architectural drift in the context of de novo software development, to characterize it, and to evaluate whether its detection leads to inconsistency removal illustrated that detection of inconsistencies was insufficient to prompt their removal, in the small, informal team context studied.
Abstract: Objectives: Software architecture is perceived as one of the most important artefacts created during a system's design. However, implementations often diverge from their intended architectures: a phenomenon called architectural drift. The objective of this research is to assess the occurrence of architectural drift in the context of de novo software development, to characterize it, and to evaluate whether its detection leads to inconsistency removal. Method: An in vivo, longitudinal case study was performed during the development of a commercial software system, where an approach based on Reflexion Modelling was employed to detect architectural drift. Observation and think-aloud data, captured during the system's development, were assessed for the presence and types of architectural drift. When divergences were identified, the data were further analysed to see if identification led to the removal of these divergences. Results: The analysed system diverged from the intended architecture, during the initial implementation of the system. Surprisingly however, this work showed that Reflexion Modelling served to conceal some of the inconsistencies, a finding that directly contradicts the high regard that this technique enjoys as an architectural evaluation tool. Finally, the analysis illustrated that detection of inconsistencies was insufficient to prompt their removal, in the small, informal team context studied. Conclusions: Although the utility of the approach for detecting inconsistencies was demonstrated in most cases, it also served to hide several inconsistencies and did not act as a trigger for their removal. Hence additional efforts must be taken to lessen architectural drift and several improvements in this regard are suggested. Copyright © 2010 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The experience with automating parts of the FMEA process, using a model checker to automate the search for system‐level consequences of component failures and performance metrics for SAL model checking are presented.
Abstract: Failure Modes and Effects Analysis (FMEA) is a widely used system and software safety analysis technique that systematically identifies failure modes of system components and explores whether these failure modes might lead to potential hazards. In practice, FMEA is typically a labor-intensive team-based exercise, with little tool support. This article presents our experience with automating parts of the FMEA process, using a model checker to automate the search for system-level consequences of component failures. The idea is to inject runtime faults into a model based on the system specification and check if the resulting model violates safety requirements, specified as temporal logical formulas. This enables the safety engineer to identify if a component failure, or combination of multiple failures, can lead to a specified hazard condition. If so, the model checker produces an example of the events leading up to the hazard occurrence which the analyst can use to identify the relevant failure propagation pathways and co-effectors. The process is applied on three medium-sized case studies modeled with Behavior Trees. Performance metrics for SAL model checking are presented. Copyright © 2011 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This paper introduces a new test case prioritization approach that maximizes the improvement of the diagnostic information per test and minimizes the loss of diagnostic quality in the prioritized test suite.
Abstract: During regression testing, test prioritization techniques select test cases that maximize the confidence in the correctness of the system when the resources for quality assurance (QA) are limited. In the event of a test failing, the fault at the root of the failure has to be localized, adding an extra debugging cost that has to be taken into account as well. However, test suites that are prioritized for failure detection can reduce the amount of useful information for fault localization. This deteriorates the quality of the diagnosis provided, making the subsequent debugging phase more expensive, and defeating the purpose of the test cost minimization. In this paper we introduce a new test case prioritization approach that maximizes the improvement of the diagnostic information per test. Our approach minimizes the loss of diagnostic quality in the prioritized test suite. When considering QA cost as a combination of testing cost and debugging cost, on our benchmark set, the results of our test case prioritization approach show reductions of up to 60% of the overall combined cost of testing and debugging, compared with the next best technique. Copyright © 2011 John Wiley & Sons, Ltd.
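A simplified sketch of diagnosis-aware prioritization: greedily pick the test whose coverage best splits the still-suspect components, so every executed test adds diagnostic information. The split heuristic below is an illustrative stand-in, not the paper's information measure.

```python
# Illustrative diagnosis-aware test prioritization. A test is most informative
# when its coverage divides the remaining suspect components roughly in half.
COVERAGE = {            # test -> set of components it exercises (invented data)
    "t1": {"A"},
    "t2": {"A", "B", "C"},
    "t3": {"B", "C", "D"},
    "t4": {"D", "E"},
    "t5": {"E"},
}

def split_quality(test, suspects):
    covered = len(COVERAGE[test] & suspects)
    return min(covered, len(suspects) - covered)

def prioritize(tests, components):
    suspects, order, remaining = set(components), [], list(tests)
    while remaining:
        best = max(remaining, key=lambda t: split_quality(t, suspects))
        order.append(best)
        remaining.remove(best)
        # Optimistically assume the test passes and exonerates what it covers,
        # but keep at least one suspect so later choices stay meaningful.
        if len(suspects - COVERAGE[best]) >= 1:
            suspects -= COVERAGE[best]
    return order

if __name__ == "__main__":
    print(prioritize(COVERAGE.keys(), {"A", "B", "C", "D", "E"}))
```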

Journal ArticleDOI
TL;DR: PTPSC is a probabilistic and timed extension of the existing scenario-based specification formalism Property Sequence Chart and has defined a formal grammar-based syntax and implemented a syntax-directed translator that can automatically generate a probabilistic monitor which combines timed Büchi automata and a sequential statistical hypothesis test process.
Abstract: Run-time monitoring is an important technique to detect erroneous run-time behaviors. Several techniques have been proposed to automatically generate monitors from specification languages to check temporal and real-time properties. However, monitoring of probabilistic properties still requires manual generation. To overcome this problem, we define a formal property specification language called Probabilistic Timed Property Sequence Chart (PTPSC). PTPSC is a probabilistic and timed extension of the existing scenario-based specification formalism Property Sequence Chart (PSC). We have defined a formal grammar-based syntax and have implemented a syntax-directed translator that can automatically generate a probabilistic monitor which combines timed Büchi automata and a sequential statistical hypothesis test process. We validate the generated monitors with a set of experiments performed with our tool WS-PSC Monitor. Copyright © 2011 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This focus section contains papers related to agile software development that address a range of research areas including the application of agile methods to safety critical software development, the relationship of agile development with user experience design and how to measure flow in lean software development.
Abstract: This focus section contains papers related to agile software development. The papers address a range of research areas including the application of agile methods to safety critical software development, the relationship of agile development with user experience design and how to measure flow in lean software development. © 2011 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: This paper exploits semantics to apply context in run‐time adaptation, particularly for services in a user‐centered smart environment by adapting web semantic technologies to enable smarter and more proactive operation of context management systems.
Abstract: This paper exploits semantics to apply context in run-time adaptation, particularly for services in a user-centered smart environment. Context-sensitive services are usually focused on their own information without interoperation pretensions. It is necessary to enable common context models and systems in order to make context-aware applications interoperable. Moreover, context management systems need to implement mechanisms to support the dynamic behavior of the users and their surroundings, including techniques to adapt the model to their future needs, to maintain the context information at run-time and to be interoperable with external context models. By adapting web semantic technologies we can enable smarter and more proactive operation of context management systems. Copyright © 2010 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The paper describes two families of heuristics and presents some experimental results which indicate that coloring is both efficient and tractable and that bidirectional coloring gives the best results.
Abstract: Late binding and subtyping create run-time overhead for object-oriented languages. Dynamic typing and multiple inheritance create even more overhead. Static typing and single inheritance lead to two major invariants, of reference and position, that make the implementation as efficient as possible. Coloring is a technique that preserves these invariants for dynamic typing or multiple inheritance at minimal spatial cost. Coloring has been independently proposed for method invocation under the name of selector coloring, for subtype tests under the name of pack encoding, and for attribute access and object layout. This paper reviews a number of uses of coloring for optimizing object-oriented programming, generalizes them, and specifies several variations, such as bidirectional and n-directional coloring. Coloring is NP-hard, hence compilers that use it depend on heuristics. The paper describes two families of heuristics and presents some experimental results which indicate that coloring is both efficient and tractable and that bidirectional coloring gives the best results. Copyright © 2010 John Wiley & Sons, Ltd.
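A minimal sketch of the coloring idea for method invocation: selectors understood by a common class conflict and must receive distinct slots (colors) in the dispatch tables. The greedy heuristic below is only illustrative; it is not the bidirectional coloring evaluated in the paper.

```python
# Illustrative greedy selector coloring: two selectors conflict when some class
# understands both, so they need distinct slots in the dispatch tables.
CLASSES = {                     # class -> selectors it understands (incl. inherited)
    "A": {"foo", "bar"},
    "B": {"foo", "bar", "baz"},
    "C": {"foo", "qux"},
    "D": {"qux", "quux"},
}

def color_selectors(classes):
    # Build the conflict graph between selectors.
    conflicts = {s: set() for sels in classes.values() for s in sels}
    for sels in classes.values():
        for s in sels:
            conflicts[s] |= sels - {s}
    # Greedy coloring, most-constrained selectors first.
    colors = {}
    for s in sorted(conflicts, key=lambda s: len(conflicts[s]), reverse=True):
        used = {colors[t] for t in conflicts[s] if t in colors}
        colors[s] = next(c for c in range(len(conflicts)) if c not in used)
    return colors

if __name__ == "__main__":
    colors = color_selectors(CLASSES)
    print("selector slots:", colors)
    print(f"dispatch tables need {max(colors.values()) + 1} slots "
          f"instead of {len(colors)}")
```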

Journal ArticleDOI
TL;DR: This paper designs several test profiles for the notions of ‘ by exclusion’ and ‘by partitioning’, and uses these profiles to illustrate the new approach to generic random testing, which brings at least a higher failure‐detection capability or a lower computational overhead.
Abstract: Random testing (RT), which simply selects test cases at random from the whole input domain, has been widely applied to test software and assess the software reliability. However, it is controversial whether RT is an effective method to detect software failures. Adaptive random testing (ART) is an enhancement of RT in terms of failure-detection effectiveness. Its basic intuition is to evenly spread random test cases all over the input domain. There are various notions to achieve the goal of even spread, and each notion can be implemented by different algorithms. For example, ‘by exclusion’ and ‘by partitioning’ are two different notions to evenly spread test cases. Restricted random testing (RRT) is a typical algorithm for the notion of ‘by exclusion’, whereas the notion of ‘by partitioning’ can be implemented by either the technique of bisection (ART-B) or the technique of random partitioning (ART-RP). In this paper, we propose a generic approach that can be used to implement different notions. In the new approach, test cases are simply selected based on test profiles that are in turn designed according to certain notions. In this study, we design several test profiles for the notions of ‘by exclusion’ and ‘by partitioning’, and then use these profiles to illustrate our new approach. Our experimental results show that compared with the original RRT, ART-B, and ART-RP algorithms, our new approach normally brings at least a higher failure-detection capability or a lower computational overhead. Copyright © 2011 John Wiley & Sons, Ltd. (A preliminary version of this paper was presented at the 10th International Conference on Quality Software (QSIC 2010)).
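A toy sketch of profile-based test selection for the ‘by exclusion’ notion on a one-dimensional input domain: candidates that fall inside exclusion zones around already-executed tests get zero weight in the profile. The profile shape and parameters are illustrative choices, not the paper's test profiles.

```python
# Illustrative profile-based adaptive random testing on the unit interval:
# the selection profile assigns zero probability near already-executed tests,
# approximating the 'by exclusion' notion in a deliberately simplified way.
import random

def next_test(executed, radius=0.05, attempts=1000):
    """Sample uniformly but reject candidates whose profile weight is zero."""
    for _ in range(attempts):
        candidate = random.random()
        weight = 0.0 if any(abs(candidate - t) < radius for t in executed) else 1.0
        if weight > 0:
            return candidate
    return random.random()   # domain nearly saturated; fall back to pure RT

if __name__ == "__main__":
    random.seed(42)
    executed = []
    for _ in range(10):
        executed.append(next_test(executed))
    print([round(t, 3) for t in executed])
```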

Journal ArticleDOI
TL;DR: YouGen is presented, a new GBTG tool supporting many of the tags provided by previous tools, including covering‐array tags, which support a generalized form of pairwise testing and semantics for the YouGen tags using parse trees and a new construct, generation trees.
Abstract: Grammars are traditionally used to recognize or parse sentences in a language, but they can also be used to generate sentences. In grammar-based test generation (GBTG), context-free grammars are used to generate sentences that are interpreted as test cases. A generator reads a grammar G and generates L(G), the language accepted by the grammar. Often L(G) is so large that it is not practical to execute all of the generated cases. Therefore, GBTG tools support ‘tags’: extra-grammatical annotations which restrict the generation. Since its introduction in the early 1970s, GBTG has become well established: proven on industrial projects and widely published in academic venues. Despite the demonstrated effectiveness, the tool support is uneven; some tools target specific domains, e.g. compiler testing, while others are proprietary. The tools can be difficult to use and the precise meaning of the tags is sometimes unclear. As a result, while many testing practitioners and researchers are aware of GBTG, few have detailed knowledge or experience. We present YouGen, a new GBTG tool supporting many of the tags provided by previous tools. In addition, YouGen incorporates covering-array tags, which support a generalized form of pairwise testing. These tags add considerable power to GBTG tools and have been available only in limited form in previous GBTG tools. We provide semantics for the YouGen tags using parse trees and a new construct, generation trees. We illustrate YouGen with both simple examples and a number of industrial case studies. Copyright © 2010 John Wiley & Sons, Ltd.
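The core of grammar-based test generation can be sketched in a few lines: expand a context-free grammar into sentences, with a depth bound playing the role of a restriction tag. The grammar notation here is ad hoc and is not YouGen's input format or tag semantics.

```python
# Minimal grammar-based test generation: exhaustively expand a context-free
# grammar into sentences, bounded by a recursion depth acting as a 'tag'.
GRAMMAR = {
    "<expr>": [["<num>"], ["<expr>", "+", "<num>"], ["(", "<expr>", ")"]],
    "<num>":  [["0"], ["1"]],
}

def generate(symbol, depth):
    if symbol not in GRAMMAR:                 # terminal symbol
        return [[symbol]]
    if depth == 0:                            # depth budget exhausted
        return []
    sentences = []
    for production in GRAMMAR[symbol]:
        expansions = [[]]
        for part in production:
            expansions = [prefix + suffix
                          for prefix in expansions
                          for suffix in generate(part, depth - 1)]
        sentences.extend(expansions)
    return sentences

if __name__ == "__main__":
    tests = [" ".join(s) for s in generate("<expr>", depth=3)]
    print(len(tests), "test inputs, e.g.:", tests[:5])
```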

Journal ArticleDOI
TL;DR: This paper investigates the adaptation of TDD‐like practices for already‐implemented code, in particular legacy systems, and presents a TDM approach that assists software development and testing managers to use the limited resources they have for testing legacy systems efficiently.
Abstract: Test-driven development (TDD) is a software development practice that prescribes writing unit tests before writing implementation code. Recent studies have shown that TDD practices can significantly reduce the number of pre-release defects. However, most TDD research thus far has focused on new development. We investigate the adaptation of TDD-like practices for already-implemented code, in particular legacy systems. We call such an adaptation ‘Test-driven maintenance’ (TDM). In this paper, we present a TDM approach that assists software development and testing managers to use the limited resources they have for testing legacy systems efficiently. The approach leverages the development history of a project to generate a prioritized list of functions that managers should focus their unit test writing resources on. The list is updated dynamically as the development of the legacy system progresses. We evaluate our approach on two large software systems: a large commercial system and the Eclipse Open Source Software system. For both systems, our findings suggest that heuristics based on the function size, modification frequency and bug fixing frequency should be used to prioritize the unit test writing efforts for legacy systems. Copyright © 2011 John Wiley & Sons, Ltd.
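A hedged sketch of how such a prioritized list might be scored from history data, combining the three heuristics the study found useful (function size, modification frequency and bug-fixing frequency); the weights and normalization are arbitrary example choices, not the paper's.

```python
# Illustrative priority score for deciding which legacy functions to write unit
# tests for first. Function names and history numbers are invented example data.
FUNCTIONS = [
    # name, lines of code, commits touching it, bug-fix commits touching it
    ("parse_order",   320, 41, 12),
    ("format_report",  80,  5,  0),
    ("apply_discount", 150, 22,  9),
    ("load_config",    60, 30,  2),
]

def priority(loc, commits, bug_fixes, weights=(0.3, 0.3, 0.4)):
    max_loc = max(f[1] for f in FUNCTIONS)
    max_commits = max(f[2] for f in FUNCTIONS)
    max_fixes = max(f[3] for f in FUNCTIONS) or 1
    w_size, w_change, w_bug = weights
    return (w_size * loc / max_loc
            + w_change * commits / max_commits
            + w_bug * bug_fixes / max_fixes)

if __name__ == "__main__":
    ranked = sorted(FUNCTIONS, key=lambda f: priority(*f[1:]), reverse=True)
    for name, *_ in ranked:
        print(name)
```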

Journal ArticleDOI
TL;DR: A test case reusability analysis technique to identify reusable test cases of the original test suite based on graph analysis and a test suite augmentation technique to generate new test cases to cover the change‐related parts of the new model.
Abstract: Model-based testing helps test engineers automate their testing tasks so that they are more cost-effective. When the model is changed because of the evolution of the specification, it is important to maintain the test suites up to date for regression testing. A complete regeneration of the whole test suite from the new model, although inefficient, is still frequently used in the industry, including Microsoft. To handle specification evolution effectively, we propose a test case reusability analysis technique to identify reusable test cases of the original test suite based on graph analysis. We also develop a test suite augmentation technique to generate new test cases to cover the change-related parts of the new model. The experiment on four large protocol document testing projects shows that our technique can successfully identify a high percentage of reusable test cases and generate low-redundancy new test cases. When compared with a complete regeneration of the whole test suite, our technique significantly reduces regression testing time while maintaining the stability of requirement coverage over the evolution of requirements specifications. Copyright © 2011 John Wiley & Sons, Ltd.
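A minimal sketch of the reusability idea: treat the behaviour model as a graph of labelled transitions and keep an old test case only if every step it takes still exists in the evolved model. The model and suite below are invented, and the paper's graph analysis is more involved.

```python
# Illustrative reusability check over an evolved behaviour model: a test case is
# a sequence of actions (a path in the model graph) and remains reusable only if
# every transition it exercises still exists in the new model.
NEW_MODEL = {   # (state, action) -> next state, after the specification evolved
    ("idle", "login"): "authed",
    ("authed", "query_v2"): "authed",   # 'query' was renamed in the new spec
    ("authed", "logout"): "idle",
}

def is_reusable(test_case, model, start="idle"):
    state = start
    for action in test_case:
        nxt = model.get((state, action))
        if nxt is None:
            return False            # the step no longer exists: not reusable
        state = nxt
    return True

if __name__ == "__main__":
    old_suite = [["login", "logout"], ["login", "query", "logout"]]
    reusable = [t for t in old_suite if is_reusable(t, NEW_MODEL)]
    obsolete = [t for t in old_suite if not is_reusable(t, NEW_MODEL)]
    print("reusable:", reusable)
    print("to regenerate/augment:", obsolete)
```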

Journal ArticleDOI
TL;DR: The first adaptive worst‐case execution time (WCET)‐aware compiler framework for an automatic search of compiler optimization sequences that yield highly optimized code is proposed and demonstrated on real‐life benchmarks showing that standard optimization levels can be significantly outperformed.
Abstract: With the growing complexity of embedded systems software, high code quality can only be achieved using a compiler. Sophisticated compilers provide a vast spectrum of various optimizations to improve code aggressively w.r.t. different objective functions, e.g. average-case execution time (ACET) or code size. Owing to the complex interactions between the optimizations, the choice for a promising sequence of code transformations is not trivial. Compiler developers address this problem by proposing standard optimization levels, e.g. O3 or Os. However, previous studies have shown that these standard levels often miss optimization potential or might even result in performance degradation. In this paper, we propose the first adaptive worst-case execution time (WCET)-aware compiler framework for an automatic search of compiler optimization sequences that yield highly optimized code. Besides the objective functions ACET and code size, we consider the WCET which is a crucial parameter for real-time systems. To find suitable trade-offs between these objectives, stochastic evolutionary multi-objective algorithms identifying Pareto optimal solutions for the objectives 〈WCET, ACET 〉 and 〈WCET, code size 〉 are exploited. A comparison based on statistical performance assessments is performed that helps to determine the most suitable multi-objective optimizer. The effectiveness of our approach is demonstrated on real-life benchmarks showing that standard optimization levels can be significantly outperformed. Copyright © 2011 John Wiley & Sons, Ltd.
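To illustrate the multi-objective view, the sketch below extracts the Pareto-optimal candidates from a set of already-evaluated optimization sequences; the candidate data are invented, and the evolutionary search and WCET analysis of the actual framework are omitted.

```python
# Illustrative Pareto filtering over candidate optimization sequences evaluated
# on two objectives (WCET, code size). Sequence names and numbers are examples.
CANDIDATES = {
    # optimization sequence -> (WCET in cycles, code size in bytes)
    "O3":            (120_000, 48_000),
    "Os":            (150_000, 30_000),
    "inline+unroll": (110_000, 52_000),
    "inline+ifconv": (115_000, 44_000),
    "unroll only":   (140_000, 47_000),
}

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return {name: objs for name, objs in candidates.items()
            if not any(dominates(other, objs)
                       for o_name, other in candidates.items() if o_name != name)}

if __name__ == "__main__":
    for name, (wcet, size) in sorted(pareto_front(CANDIDATES).items()):
        print(f"{name}: WCET={wcet} cycles, size={size} bytes")
```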