
Showing papers presented at the "International Conference on Software and Data Technologies" in 2015


Book ChapterDOI
20 Jul 2015
TL;DR: This paper presents a method based on the problem-based privacy analysis (ProPAn) that helps to elicit the needed information for a PIA systematically from a given set of functional requirements.
Abstract: Privacy-aware software development is gaining more and more importance for nearly all information systems developed nowadays. As a tool to force organizations and companies to consider privacy properly during the planning and execution of their projects, some governments advise performing privacy impact assessments (PIAs). During a PIA, a report has to be created that summarizes the consequences the project may have on privacy and how the organization or company addresses these consequences. As a basis for a PIA, it has to be documented which personal data is collected, processed, stored, and shared with others in the context of the project. Obtaining this information is a difficult task that is not yet well supported by existing methods. In this paper, we present a method based on the problem-based privacy analysis (ProPAn) that helps to elicit the information needed for a PIA systematically from a given set of functional requirements. Our tool-supported method shall reduce the effort of eliciting the information needed to conduct a PIA while keeping that information as complete and consistent as possible.

13 citations


Book ChapterDOI
20 Jul 2015
TL;DR: An approach and toolset architecture for automating the testing of end-to-end services in distributed and heterogeneous systems, comprising a visual modeling environment, a test execution engine, and a distributed test monitoring and control infrastructure is proposed.
Abstract: The growing dependence of our society on increasingly complex software systems makes software testing ever more important and challenging. In many domains, several independent systems, forming a distributed and heterogeneous system of systems, are involved in the provisioning of end-to-end services to users. However, existing test automation techniques provide little tool support for properly testing such systems. Hence, we propose an approach and toolset architecture for automating the testing of end-to-end services in distributed and heterogeneous systems, comprising a visual modeling environment, a test execution engine, and a distributed test monitoring and control infrastructure. The only manual activity required is the description of the participants and behavior of the services under test with UML sequence diagrams, which are translated to extended Petri nets for efficient test input generation and test output checking at runtime. A real world example from the Ambient Assisted Living domain illustrates the approach.
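The sequence-diagram-to-Petri-net translation at the heart of this approach can be illustrated with a toy sketch (Python; the net structure, event names, and API are invented for illustration, not taken from the paper's toolset): an observed message trace conforms to the expected behavior only if every event fires an enabled transition.

```python
# Hypothetical sketch: checking an observed message trace against a Petri net
# oracle derived from a UML sequence diagram.

class PetriNet:
    def __init__(self):
        self.pre = {}    # transition -> set of input places
        self.post = {}   # transition -> set of output places
        self.marking = set()

    def add_transition(self, name, pre, post):
        self.pre[name] = set(pre)
        self.post[name] = set(post)

    def fire(self, name):
        """Fire a transition if enabled; False signals a conformance violation."""
        if not self.pre[name] <= self.marking:
            return False
        self.marking -= self.pre[name]
        self.marking |= self.post[name]
        return True

# Sequence diagram: client -> serviceA ("login"), then serviceA -> serviceB ("fetch")
net = PetriNet()
net.marking = {"p0"}
net.add_transition("login", {"p0"}, {"p1"})
net.add_transition("fetch", {"p1"}, {"p2"})

observed = ["login", "fetch"]            # events captured by the monitoring layer
ok = all(net.fire(e) for e in observed)
print(ok)  # True: the trace conforms to the expected message order
```

At runtime the monitoring infrastructure would feed captured events into such a net, flagging a test failure as soon as an event arrives whose transition is not enabled.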

10 citations


Book ChapterDOI
20 Jul 2015
TL;DR: This paper applies SuperMod to a well-known case study, the Home Automation System product line, and learns that the tool supports a broad variety of iterative and incremental development processes, ranging from phase-structured to feature-driven.
Abstract: Software Product Line Engineering promises to increase the productivity of software development. In the literature, a plan-driven process has been established that is divided up into domain and application engineering. We argue that the strictly sequential order of its process activities implies several disadvantages such as increased complexity, late customer feedback, and duplicate maintenance. SuperMod is a novel model-driven tool based upon a filtered editing model oriented towards version control. The tool provides integrated support for domain and application engineering, offering an iterative and incremental style of development. In this paper, we apply SuperMod to a well-known case study, the Home Automation System product line. We learn that the tool supports a broad variety of iterative and incremental development processes, ranging from phase-structured to feature-driven. Furthermore, it can mitigate the disadvantages of the traditional software product line development process.

7 citations


Book ChapterDOI
20 Jul 2015
TL;DR: It is suggested that this theory, or one similar to it, may be applied to support situated software development, by providing an overarching model within which software initiatives might be categorised and understood.
Abstract: There is growing acknowledgement within the software engineering community that a theory of software development is needed to integrate the myriad methodologies that are currently popular, some of which are based on opposing perspectives. We have been developing such a theory for a number of years. In this paper, we overview our theory and report on a recent ontological analysis of the theory constructs. We suggest that, once fully developed, this theory, or one similar to it, may be applied to support situated software development, by providing an overarching model within which software initiatives might be categorised and understood. Such understanding would inevitably lead to greater predictability with respect to outcomes.

7 citations


Book ChapterDOI
20 Jul 2015
TL;DR: Assisting enterprises’ experts in the business process outsourcing to the Cloud decision is the focus of this paper, which extends the BPMN 2.0 language to explicitly support the specification of outsourcing concepts, and presents an automated approach to help decision makers identify those parts of their business process that benefit most from outsourcing toThe Cloud.
Abstract: Outsourcing enterprises’ data, business processes, and applications to the Cloud is emerging as a major trend thanks to the Cloud’s offerings and features. Essentially, when outsourcing, enterprises expect to save costs, improve software and hardware performance, and gain more flexibility in responding to dynamic customer requirements. However, adopting the Cloud as an alternative environment for managing business processes leads to a radical change in the enterprise IT infrastructure. Furthermore, additional challenges may appear, such as data security, vendor lock-in, and labor union concerns, making the outsourcing decision one that requires deep analysis and knowledge of the business process context. Assisting enterprises’ experts in the decision to outsource business processes to the Cloud is the focus of this paper: it extends the BPMN 2.0 language to explicitly support the specification of outsourcing concepts, and it presents an automated approach to help decision makers identify those parts of their business process that benefit most from outsourcing to the Cloud. The extension also helps identify the Cloud services most suitable to support the outsourced business process requirements.
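The decision support described above can be caricatured in a few lines (Python; the attribute names and the selection rule are assumptions for illustration, not the paper's BPMN 2.0 extension or its actual algorithm):

```python
# Hypothetical sketch: annotate process activities with outsourcing-relevant
# properties, then flag the ones that would benefit most from the Cloud.

activities = [
    {"name": "RenderInvoice",  "compute_heavy": True,  "sensitive_data": False},
    {"name": "ApprovePayroll", "compute_heavy": False, "sensitive_data": True},
]

def outsourcing_candidates(activities):
    """Toy rule: compute-heavy activities without sensitive data are candidates."""
    return [a["name"] for a in activities
            if a["compute_heavy"] and not a["sensitive_data"]]

print(outsourcing_candidates(activities))  # ['RenderInvoice']
```

A real decision model would weigh many more criteria (cost, latency, compliance), but the shape is the same: annotations on process elements drive an automated selection.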

6 citations


Book ChapterDOI
20 Jul 2015
TL;DR: This work proposes a new UML profile, baptized R-UML (Reconfigurable UML), to model reconfigurable systems, and presents an automatic translation of R- UML into R-TNCES, a Petri Net-based formalism, in order to support model checking.
Abstract: The Unified Modeling Language (UML) is currently accepted as the standard for modeling software and control systems, since it allows highlighting different aspects of the system under design. Nevertheless, UML lacks formal semantics, and hence it is not possible to apply mathematical techniques directly to UML models in order to verify them. Furthermore, UML does not feature explicit semantics for modeling flexible control systems with adaptive shared resources either. Thus, this work proposes a new UML profile, baptized R-UML (Reconfigurable UML), to model such reconfigurable systems. R-UML is enriched with a PCP-based solution for the management of resource sharing. The paper also presents an automatic translation of R-UML into R-TNCES, a Petri net-based formalism, in order to support model checking.
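The PCP (Priority Ceiling Protocol) rule that the profile builds on for resource sharing can be sketched as follows (illustrative Python, not R-UML itself): a task may acquire a resource only if its priority is strictly higher than the ceilings of all resources currently locked by other tasks.

```python
# Toy sketch of the Priority Ceiling Protocol admission test.

def can_lock(task_priority, locked_ceilings):
    """locked_ceilings: ceilings of resources currently held by *other* tasks.
    A ceiling is the highest priority of any task that may use the resource."""
    return all(task_priority > c for c in locked_ceilings)

ceilings = {"bus": 3, "sensor": 2}

print(can_lock(4, [ceilings["bus"]]))  # True:  priority 4 > ceiling 3
print(can_lock(2, [ceilings["bus"]]))  # False: blocking avoids deadlock and
                                       #        unbounded priority inversion
```

This is the property a model checker can then verify on the R-TNCES translation, e.g. absence of deadlock under all reconfiguration scenarios.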

6 citations


Book ChapterDOI
20 Jul 2015
TL;DR: A new Eclipse-based IDE for teaching Java following the object-later approach, which allows the programmer to write code in Java--, a smaller version of the Java language without object-oriented features, while retaining all the powerful features of an IDE like Eclipse.
Abstract: In this paper, we describe a new Eclipse-based IDE for teaching Java following the object-later approach. The IDE allows the programmer to write code in Java--, a smaller version of the Java language that does not include object-oriented features, while providing all the powerful features available in an IDE like Eclipse (such as debugging, automatic building, and project wizards). With our implementation, it is also straightforward to create self-assessment exercises for students, integrated with Eclipse and JUnit.

5 citations


Book ChapterDOI
20 Jul 2015
TL;DR: This paper describes an approach for deriving behavior documentation from runtime tests in terms of UML interaction models by leveraging the structure of scenario-based runtime tests to render the resulting interaction models and diagrams tailorable by humans for a given task.
Abstract: Documenting system behavior explicitly using graphical models (e.g. UML activity or sequence diagrams) facilitates communication about and understanding of software systems during development and maintenance tasks. Creating graphical models manually is a time-consuming and often error-prone task. Deriving models from system-execution traces, however, suffers from resulting model sizes that render the models unmanageable for humans. This paper describes an approach for deriving behavior documentation from runtime tests in terms of UML interaction models. Key to our approach is leveraging the structure of scenario-based runtime tests to render the resulting interaction models and diagrams tailorable by humans for a given task. Each derived model represents a particular view on the test-execution trace. This way, one can benefit from tailored graphical models while controlling the model size. The approach builds on conceptual mappings (transformation rules) between a test-execution trace metamodel and the UML2 metamodel. In addition, we provide means to turn selected details of test specifications and of the testing environment (i.e. test parts and call scopes) into views on the test-execution trace (scenario-test viewpoint). A prototype implementation called KaleidoScope, based on a software-testing framework (STORM) and model transformations (Eclipse M2M/QVTo), is available.
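The core mapping idea, projecting a test-execution trace onto a tailored view and emitting the surviving calls as messages of a UML interaction, can be sketched as follows (illustrative Python; the trace format, scope names, and filtering rule are assumptions, not KaleidoScope's actual metamodels):

```python
# Toy sketch: filter a test-execution trace by call scope, then render each
# remaining event as one sequence-diagram message.

trace = [
    {"caller": "Test", "callee": "OrderService", "op": "place", "scope": "scenario"},
    {"caller": "OrderService", "callee": "Logger", "op": "debug", "scope": "infra"},
    {"caller": "OrderService", "callee": "Stock", "op": "reserve", "scope": "scenario"},
]

def to_interaction(trace, scopes):
    """Keep only events in the selected scopes; each becomes one message."""
    return [f'{e["caller"]} -> {e["callee"]}: {e["op"]}()'
            for e in trace if e["scope"] in scopes]

for msg in to_interaction(trace, {"scenario"}):
    print(msg)
# Test -> OrderService: place()
# OrderService -> Stock: reserve()
```

Different scope selections yield different views on the same trace, which is how model size stays under the reader's control.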

5 citations


Book ChapterDOI
20 Jul 2015
TL;DR: This paper presents a novel approach of using an adaptive domain-specific modeling language (ADSML) to support the model-driven development of cross-platform mobile applications emphasizing the Android and iOS platforms.
Abstract: The use of domain-specific modeling languages (DSMLs) is a common approach to support cross-platform development of mobile applications. However, most DSML-based approaches suffer from a number of limitations, such as poor performance. Furthermore, DSMLs that are written ab initio are not able to access the entire range of capabilities supported by the native mobile platforms. This paper presents a novel approach of using an adaptive domain-specific modeling language (ADSML) to support the model-driven development of cross-platform mobile applications, emphasizing the Android and iOS platforms. We discuss the techniques in the design of an ADSML, including meta-model extraction, meta-model elevation, and meta-model alignment, and how these techniques can be incorporated into an automated process where a common, platform-independent DSML is dynamically synthesized from the native APIs of multiple target mobile platforms. Our approach is capable of generating high-performance native applications; it is able to access the full capabilities of the target native platforms; and it is adaptable to the rapid evolution of those platforms.

4 citations


Book ChapterDOI
20 Jul 2015
TL;DR: This paper demonstrates how QVTo model transformations can be described and designed informally through the mathematical notation of set theory and functions, and formulates two design principles of developingQVTo transformations: structural decomposition and chaining model transformations.
Abstract: Model transformations play an essential role in Model Driven Engineering (MDE), as they provide the means to use models as first-class artifacts in the software development process. While there exist a number of languages specifically designed to program model transformations, the practical challenges of documenting and designing model transformations are hardly addressed. In this paper we demonstrate how QVTo model transformations can be described and designed informally through the mathematical notation of set theory and functions. We align the QVTo concepts with the mathematical concepts, and, building on the latter, we formulate two design principles of developing QVTo transformations: structural decomposition and chaining model transformations.
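Viewing transformations as functions between sets of model elements makes chaining plain function composition, which can be sketched as follows (illustrative Python, not QVTo; the toy transformations are invented):

```python
# Sketch: model transformations as functions over sets of model elements,
# so that chaining is ordinary function composition.

def chain(*transforms):
    """Compose transformations left-to-right: chain(f, g)(m) == g(f(m))."""
    def composed(model):
        for t in transforms:
            model = t(model)
        return model
    return composed

# Two toy transformations over a model represented as a set of (kind, name) pairs
def classes_to_tables(model):
    return {("table", name) for (kind, name) in model if kind == "class"}

def add_id_columns(model):
    return model | {("column", name + "_id")
                    for (kind, name) in model if kind == "table"}

m = {("class", "Order"), ("attribute", "total")}
result = chain(classes_to_tables, add_id_columns)(m)
print(sorted(result))  # [('column', 'Order_id'), ('table', 'Order')]
```

Structural decomposition then amounts to splitting one large function into small ones with clearly stated domains and ranges, exactly as in the mathematical notation the paper advocates.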

3 citations


Book ChapterDOI
20 Jul 2015
TL;DR: In this paper, the authors propose a technique to annotate goals with the concerns they address in order to support the understanding of goal refinement: goals are refined into sub-goals by referring to the annotated concerns, and the concerns annotated to a goal and its sub-goals provide the meaning of the refinement.
Abstract: In goal-oriented requirements analysis, goals specify multiple concerns such as functions, strategies, and non-functions, and they are refined into sub-goals from mixed views of these concerns. This intermixture of concerns makes it difficult for a requirements analyst to understand and maintain goal refinements. Separating concerns and specifying them explicitly is a useful approach to improving the understandability of goal refinements, i.e., the relations between goals and their sub-goals. In this paper, we propose a technique to annotate goals with the concerns they address in order to support the understanding of goal refinement. In our approach, goals are refined into sub-goals by referring to the annotated concerns, and the concerns annotated to a goal and its sub-goals provide the meaning of its refinement. By tracing and focusing on the annotated concerns, requirements analysts can understand goal refinements and modify unsuitable ones. We have developed a supporting tool and conducted an exploratory experiment to evaluate the usefulness of our approach.
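The annotation idea can be sketched with a toy goal model (illustrative Python; the structure and concern names are assumptions, not the paper's notation): comparing a goal's concerns with those covered by its sub-goals shows what a refinement actually addresses.

```python
# Toy sketch: concern-annotated goals; a refinement is inspected by tracing
# which concerns its sub-goals cover.

goals = {
    "Provide secure login": {"concerns": {"function", "security"},
                             "subgoals": ["Authenticate user", "Encrypt credentials"]},
    "Authenticate user":    {"concerns": {"function"}, "subgoals": []},
    "Encrypt credentials":  {"concerns": {"security"}, "subgoals": []},
}

def refinement_concerns(goal):
    """Concerns addressed across a goal's direct sub-goals."""
    covered = set()
    for sub in goals[goal]["subgoals"]:
        covered |= goals[sub]["concerns"]
    return covered

# A non-empty difference between a goal's concerns and its sub-goals' concerns
# flags an unsuitable refinement for the analyst to revisit.
gap = goals["Provide secure login"]["concerns"] - refinement_concerns("Provide secure login")
print(sorted(gap))  # []
```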

Book ChapterDOI
20 Jul 2015
TL;DR: A set of tools which guides the developers of Cloud applications in key steps to capture energy goals in a measurable way and relate them with important Non-Functional Requirements (NFR) and KPI are presented.
Abstract: ICT energy efficiency is a growing concern. A great effort has already been made to render hardware more energy efficient and aware. Although part of that effort is devoted to specific software areas like embedded/mobile systems, much remains to be done at the software level, especially for applications deployed in the Cloud. There is an increasing need to help Cloud application developers learn to reason about how much energy their applications consume on the server side. This paper presents a set of tools which guides the developers of Cloud applications in key steps. First, at the requirements stage, energy goals are captured in a measurable way and related to important Non-Functional Requirements (NFR). Second, at the design level, a UML profile supporting energy Key Performance Indicators (KPI) is used to keep track of those goals and metrics across the functional design of the application. Third, at runtime, measurement probes are automatically deployed and the collected data is processed so that it can be analysed at the level of the previously defined goals. Specific tools for analysing energy behaviour and helping choose among different design alternatives are also proposed.

Book ChapterDOI
20 Jul 2015
TL;DR: The proposed process provides an easy and well supported path to the definition and implementation of effective KPI and project success indicators and was applied in the evaluation of the research project MUSES.
Abstract: KPI (Key Process Indicators) are usually defined very early in a project’s life, when few details about the project are known. Moreover, the definition of KPI does not always follow a systematic and effective methodology. As a result, KPI and project success indicators are often defined in a rather generic and imprecise manner. We need to precisely define KPI and project success indicators, guarantee that the data upon which they are based can be effectively and efficiently measured, and assure that the computed indicators are adequate with respect to project objectives and represent the viewpoints of all the involved stakeholders. In this paper, a complete and coherent process for managing the KPI and success indicator lifecycle is proposed. The process is instrumented by well-integrated techniques and tools, including the Goal/Question/Metric (GQM) method for the definition of measures and the R statistical language and environment for analyzing data and computing indicators. The proposed process was applied in the evaluation of the research project MUSES. The MUSES case study shows that the proposed process provides an easy and well-supported path to the definition and implementation of effective KPI and project success indicators.
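The paper instruments indicator computation with R; for illustration only, the same GQM-style idea, a KPI computed as a function over base measures attached to metrics, can be sketched in Python (the data and the indicator are invented, not taken from the MUSES project):

```python
# Hypothetical GQM-style sketch: goal -> questions -> metrics -> indicator.
from statistics import mean

# Base measures collected for two metrics (illustrative data, per iteration)
metrics = {
    "defects_found": [4, 2, 5, 1],
    "defects_fixed": [3, 2, 4, 1],
}

def fix_rate_kpi(m):
    """KPI: fraction of found defects that were fixed, averaged per iteration."""
    rates = [fixed / found
             for found, fixed in zip(m["defects_found"], m["defects_fixed"])]
    return mean(rates)

kpi = fix_rate_kpi(metrics)
print(kpi)
```

The point of the process is that such a definition is written down precisely (via GQM) before data collection starts, so the indicator is computable and agreed upon by all stakeholders.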

Book ChapterDOI
20 Jul 2015
TL;DR: A set of architectural abstractions aimed at representing sensors’ measurements independently of sensor technology is proposed, which can reduce the effort for data fusion and interpretation and enforces both the reuse of existing infrastructure and the openness of the sensing layer by providing a common framework for representing sensors’ readings.
Abstract: The growing use of sensors in smart-environment applications like smart homes, hospitals, public transportation, emergency services, education, and workplaces not only generates constantly increasing amounts of sensor data, but also raises the complexity of integrating heterogeneous data and hardware devices. Existing infrastructures should be reusable under the requirements of different application domains, applications should be able to manage data coming from different devices without knowing the intrinsic characteristics of the sensing devices, and, finally, the introduction of new devices should be completely transparent to existing applications. The paper proposes a set of architectural abstractions aimed at representing sensors’ measurements independently of the sensors’ technology. Such a set can reduce the effort for data fusion and interpretation; moreover, it enforces both the reuse of existing infrastructure and the openness of the sensing layer by providing a common framework for representing sensors’ readings. The abstractions rely on the concept of space: data is localized both in a positioning space and in a measurement space that are subjective with respect to the entity observing the data. Mapping functions allow data to be mapped into different spaces, so that different entities relying on different spaces can reason about the data.
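The space/mapping-function idea can be sketched as follows (illustrative Python; the space names, calibration values, and API are assumptions, not the paper's actual abstractions): a reading lives in a device-specific measurement space and is mapped into another entity's space on demand.

```python
# Toy sketch: readings localized in measurement spaces, with mapping functions
# chaining between spaces so applications never see device-specific units.

# Raw reading from a temperature sensor, in the device's own measurement space
reading = {"space": "adc_counts", "value": 512}

# Mapping functions between spaces; each entity registers the mappings it needs
mappings = {
    ("adc_counts", "celsius"): lambda v: v * 0.1 - 10.0,   # device calibration
    ("celsius", "fahrenheit"): lambda v: v * 9 / 5 + 32,
}

def map_to(reading, target, mappings):
    """Map a reading into a target space, chaining mappings if necessary."""
    space, value = reading["space"], reading["value"]
    while space != target:
        for (src, dst), f in mappings.items():
            if src == space:
                value, space = f(value), dst
                break
        else:
            raise KeyError(f"no mapping from {space} to {target}")
    return {"space": target, "value": value}

print(map_to(reading, "celsius", mappings)["value"])  # ~41.2
```

Swapping the sensor then only means registering a new calibration mapping; applications reasoning in their own space are untouched, which is the openness property the abstractions aim for.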

Book ChapterDOI
20 Jul 2015
TL;DR: This paper presents the methodology and suite of tools developed to help with the reverse engineering and understanding of mobile apps, together with the techniques developed to make educated guesses about the role and structure of the classes that make up an app.
Abstract: Nowadays mobile applications have moved into the mainstream. Service companies such as IBM advise us to develop “Mobile First”. Although early mobile apps were simple data-access front ends, today’s apps are quite complex, so the same problems of code maintenance and comprehension of poorly documented apps that arose in the desktop world now occur on mobile. Hence we need techniques to reverse engineer mobile applications starting from the source code alone. In this paper we present the methodology and suite of tools we developed to help with the reverse engineering and understanding of mobile apps. The performance of these tools is demonstrated on two case studies of iPhone applications. The contribution of the paper is to show how dynamic analysis techniques can be applied to mobile applications, along with the techniques we developed to make educated guesses about the role and structure of the classes that make up an app.

Book ChapterDOI
20 Jul 2015
TL;DR: This paper proposes to integrate a Model-Driven Design Pattern mining approach with a Formal Method technique to automatically refine and improve the precision of the results of a traditional mining tool.
Abstract: The use of Design Patterns has constantly grown in the development of Object-Oriented systems, due to the well-known advantages they offer for improving the quality of software design. However, lack of documentation about which Design Patterns are actually adopted and implemented in the code, and about the code components involved in the implementation of each Design Pattern instance, can make any operation of maintenance, reuse, or evolution impacting those components harder. Thus, several Design Pattern mining approaches and tools have been proposed to identify the instances of Design Patterns implemented in an Object-Oriented system. Nevertheless, the results produced by these approaches can be incomplete and imprecise because of the presence of false positives/negatives. In this paper we propose to integrate a Model-Driven Design Pattern mining approach with a Formal Method technique to automatically refine and improve the precision of the results of a traditional mining tool. In particular, model checking is used to refine the results of the Design Pattern Finder (DPF) tool, which implements a Model-Driven approach to detect Design Pattern instances in Object-Oriented systems. To verify and validate the feasibility and effectiveness of the proposed approach, we carried out a case study on four open source OO systems. The results of the case study showed that the technique significantly raised the precision of the instances that the DPF tool was able to identify.

Book ChapterDOI
20 Jul 2015
TL;DR: An analysis of knowledge that is difficult to protect in software development projects finds that SDP success is largely an uncertainty problem between the contractual parties at the management level, and thus technical-organizational approaches alone are inadequate for achieving success.
Abstract: In software development projects (SDP), both the supplier and the customer must share their business knowledge to achieve project success. However, this business knowledge is essential intellectual property and thus needs protection from misuse. In this paper, we present an analysis of knowledge that is difficult to protect. We enact a strategy to achieve SDP success despite these barriers. Our theoretical and empirical analysis also found that SDP success is largely an uncertainty problem between the contractual parties at the management level, and thus technical-organizational approaches alone are inadequate for achieving success. Based on property rights theory, we introduce two models for protecting knowledge depending on uncertainties. Our findings offer managers important insights into how they can design and enact fixed-price contracts in particular. Moreover, we show how economic theories can enhance the understanding of SDP dynamics and advance the development of a theory of effective control of SDP success.

Book ChapterDOI
20 Jul 2015
TL;DR: An offline approach to analyzing feature interactions in embedded systems is presented, first specified in terms of predicates, before being refined to timed automata.
Abstract: This paper presents an offline approach to analyzing feature interactions in embedded systems. The approach consists of a systematic process to gather the necessary information about system components and their models. The model is first specified in terms of predicates, before being refined to timed automata. The consistency of the model is verified at different development stages, and the correct linkage between the predicates and their semantic model is checked. The approach is illustrated on a use case from home automation.

Book ChapterDOI
20 Jul 2015
TL;DR: This paper proposes two classification frameworks that highlight how existing approaches deal with deviations along two different axes, detection and correction, and gives an insight into what has been left open by existing approaches and is worth considering further.
Abstract: Software Process (SP) models are the result of the efforts deployed by the software engineering community to guarantee an advanced level of SP quality. However, experience has shown that SP agents often deviate from these models to cope with the challenges of new environments. Unfortunately, such situations, if not controlled, often lead to process failure. Since the 90s, several research works have been conducted to handle this problem. In this paper, we aim at gathering these approaches into a single classification that highlights their strengths and their weaknesses. To achieve this goal, we propose two classification frameworks that show how existing approaches deal with deviations along two different axes: detection and correction. As a result of this classification, a covering graph is drawn for each framework, which gives an insight into what has been left open by existing approaches and is worth considering further. Finally, we briefly introduce the general outlines of a new contribution that we are currently working on to address the shortcomings of the existing approaches.

Book ChapterDOI
20 Jul 2015
TL;DR: A solution which automatically translates OCL invariants into aspect code able to check them incrementally after the execution of a Unit of Work, achieving good performance, a clean integration with programmers’ code that respects the original design, and easy combination with atomic all-or-nothing contexts.
Abstract: Constraints for rich domain models are easily specified with the Object Constraint Language (OCL) at the model level, but hard to translate into executable code. We propose a solution which automatically translates OCL invariants into aspect code able to check them incrementally after the execution of a Unit of Work, achieving good performance and a clean integration with programmers’ code that respects the original design and combines easily with atomic all-or-nothing contexts (database transactions, STM, ORM, etc.). The generated code solves some difficult issues: how to implement the checks, when to check, over which objects, and what to do in case of a violation.
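The incremental checking scheme (collect the objects modified during a Unit of Work, then check only their invariants on completion) can be sketched conceptually (Python rather than generated aspect code; the class and invariant are invented):

```python
# Conceptual sketch: invariants checked incrementally, only over the objects
# modified inside a Unit of Work, as the generated aspect would do on completion.

class InvariantViolation(Exception):
    pass

class UnitOfWork:
    def __init__(self):
        self.dirty = set()

    def register(self, obj):
        self.dirty.add(obj)

    def commit(self):
        """Check invariants only on modified objects, then clear the set."""
        for obj in self.dirty:
            for inv in getattr(obj, "invariants", []):
                if not inv(obj):
                    raise InvariantViolation(f"{obj!r} violates an invariant")
        self.dirty.clear()

class Account:
    # OCL: context Account inv nonNegative: self.balance >= 0
    invariants = [lambda self: self.balance >= 0]

    def __init__(self, balance):
        self.balance = balance

    def __repr__(self):
        return f"Account(balance={self.balance})"

uow = UnitOfWork()
acc = Account(100)
acc.balance -= 30
uow.register(acc)
uow.commit()          # passes: balance is 70
print(acc.balance)    # 70
```

Checking only the dirty set is what makes the scheme cheap: untouched objects are never re-validated, and the check naturally aligns with the all-or-nothing boundary of a transaction.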

Book ChapterDOI
David Budgen1
20 Jul 2015
TL;DR: The ways in which empirical practices, and evidence-based studies in particular, have begun to provide more systematic sources of evidence about what practices work, when, and why are reviewed.
Abstract: Context: The ‘prescriptions’ used in software engineering for developing and maintaining systems make use of a set of ‘practice models’, which have largely been derived by codifying successful experiences of expert practitioners. Aim: To review the ways in which empirical practices, and evidence-based studies in particular, have begun to provide more systematic sources of evidence about what practices work, when, and why. Method: My presentation will review the current situation regarding empirical studies in software engineering and examine some of the ways in which evidence-based studies can inform and influence practice. Results: These will be taken from a mix of secondary and tertiary studies. Conclusion: When compared with other disciplines that have become more ‘evidence-informed’, the knowledge base for software engineering still needs considerable refinement. However, outcomes so far are encouraging, and indicate that in the future we can expect evidence-based research to play a larger role in informing practice, standards and teaching.

Book ChapterDOI
20 Jul 2015
TL;DR: A Version Broker Service that enables consistent management of dynamic digital resources throughout their life cycle and can manage changes consistently, in a sound manner, for both perspectives, all potential users, and change cases is described.
Abstract: This work describes a Version Broker Service that enables consistent management of dynamic digital resources throughout their life cycle. The service handles the association of resources with logical specifications formally expressed in an extensible logical language understood and agreed upon by all parties. A new version of a digital resource is considered certified only if the resource owner is able to formally prove, with the help of the service, that the new version satisfies the logical specifications. A method is also described to use formal proofs both for qualifying changes (occurring either in the resource content or in the corresponding specifications) and for characterizing them through the evolution of version labels. While the resource owners may handle a fully detailed specification (called internal), the users may have a simplified view of the same resource, i.e. a particular external specification. The service we propose can manage changes consistently, in a sound manner, for both perspectives, all potential users, and all change cases.

Book ChapterDOI
20 Jul 2015
TL;DR: A model-based SPL approach for FSW SPLs that manages variability at a higher level of granularity using executable software architectural design patterns and requires less modeling during SPL engineering but more modeling at the application engineering phase is presented.
Abstract: The unmanned space flight software (FSW) domain contains a significant amount of variability within its required capabilities. Because of the large degree of architectural variability in FSW, it is difficult to develop a FSW software product line (SPL) architecture that covers all possible variations. In order to address this challenge, this paper presents a model-based SPL approach for FSW SPLs that manages variability at a higher level of granularity using executable software architectural design patterns and requires less modeling during SPL engineering but more modeling at the application engineering phase. The executable design patterns are tailored to individual FSW applications during application engineering. The paper describes in detail the application and validation of this approach to FSW.