
Showing papers presented at the "International Conference on Software and Data Technologies" in 2018


Proceedings Article
10 Sep 2018
TL;DR: This paper discusses and applies novel strategies that allow systems to dynamically adapt at runtime, focusing on resource substitution strategies that achieve a certain Quality-of-Service while sticking to a given energy limit.
Abstract: Software development for mobile systems is becoming increasingly complex. Besides enhanced functionality, resource scarcity of devices is a major reason. The relatively high energy requirements of such systems are a limiting factor due to reduced operating times. Reducing the energy consumption of mobile devices in order to prolong their operation time has thus been an interesting research topic in past years. Interestingly, the focus has mostly been on hardware optimization, energy profiles, or techniques such as “Micro-Energy Harvesting”. Only recently has the impact of software on energy consumption, by optimizing the use of resources, moved into the center of attention. Extensive wireless data transmissions, which are expensive, slow, and energy-intensive, can for example be reduced if mobile clients locally cache received data. Unfortunately, optimization at compile time is often inefficient since the optimal use of existing resources cannot really be foreseen. This paper discusses and applies novel strategies that allow systems to dynamically adapt at runtime. The focus is on resource substitution strategies that allow achieving a certain Quality-of-Service while sticking to a given energy limit.
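A minimal sketch of such a runtime substitution strategy: per request, pick the highest-QoS resource variant whose energy cost still fits the remaining budget. The strategy names, energy costs, and QoS values below are hypothetical assumptions, not taken from the paper.

```python
# Illustrative sketch only: strategy names, costs, and QoS values are assumed.
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    energy_cost: float   # energy units consumed per request
    qos: float           # delivered quality of service, 0..1

STRATEGIES = [
    Strategy("remote_fetch", energy_cost=5.0, qos=1.0),   # fresh data, radio-heavy
    Strategy("local_cache", energy_cost=0.5, qos=0.8),    # possibly stale, cheap
    Strategy("degraded_mode", energy_cost=0.1, qos=0.4),  # minimal service
]

def pick_strategy(remaining_energy, remaining_requests):
    """Substitute resources at runtime: choose the highest-QoS strategy whose
    per-request energy cost still fits the remaining per-request budget."""
    budget = remaining_energy / max(remaining_requests, 1)
    feasible = [s for s in STRATEGIES if s.energy_cost <= budget]
    if not feasible:                                  # budget exhausted
        return min(STRATEGIES, key=lambda s: s.energy_cost)
    return max(feasible, key=lambda s: s.qos)

print(pick_strategy(remaining_energy=100.0, remaining_requests=50).name)  # local_cache
```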

22 citations



Proceedings ArticleDOI
01 Jan 2018
TL;DR: This paper proposes a model that can be deployed on a cloud platform for software development companies to use and can be distributed as a web service worldwide, thus providing Bug Prediction as a Service (BPaaS).
Abstract: The presence of bugs in a software release has become inevitable. The loss incurred by a company due to the presence of bugs in a software release is phenomenal. Modern methods of testing and debugging have shifted focus from “detecting” to “predicting” bugs in the code. The existing models of bug prediction have not been optimized for commercial use. Moreover, the scalability of these models has not been discussed in depth yet. Taking into account the varying costs of fixing bugs, depending on which stage of the software development cycle the bug is detected in, this paper uses two approaches: one model that can be employed when the 'cost of changing code' curve is exponential, and another that can be used otherwise. The cases where each model is best suited are discussed. This paper proposes a model that can be deployed on a cloud platform for software development companies to use. The model in this paper aims to predict the presence or absence of a bug in the code, using machine learning classification models. Using Microsoft Azure's machine learning platform, this model can be distributed as a web service worldwide, thus providing Bug Prediction as a Service (BPaaS).
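A hedged sketch of the classification step: the paper deploys on Azure ML, but here scikit-learn stands in, and the per-module metrics used as features (lines of code, churn, complexity) and the synthetic labels are assumptions for illustration only.

```python
# Hedged sketch: feature choice and classifier are assumptions, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# hypothetical per-module metrics: lines of code, churn, cyclomatic complexity
X = rng.random((500, 3)) * [1000, 50, 20]
# toy ground truth: large, churned, complex modules are likelier to be buggy
y = ((X[:, 0] > 600) & (X[:, 1] > 25) | (X[:, 2] > 15)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", clf.score(X_te, y_te))
# deployed behind a web endpoint, clf.predict(...) would answer
# "bug present / absent" per module: the BPaaS idea in miniature.
```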

14 citations


Proceedings Article
29 Aug 2018
TL;DR: A study on refactoring a software application that contains artifacts of different programming languages, where refactoring an artifact of one language may break its interaction with artifacts written in another language.
Abstract: Different programming languages can be involved in the implementation of a single software application. In these software applications, source code of one programming language interacts with code of a different language. By refactoring an artifact of one programming language, the interaction of this artifact with an artifact written in another programming language may break. We present a study on refactoring a software application that contains artifacts of different languages.
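A minimal sketch of why such refactorings are risky: an identifier renamed in one language may still be referenced as a plain string in artifacts of other languages (reflection, configuration, embedded queries). The file layout, extensions, and identifier below are hypothetical.

```python
# Sketch under assumed project layout: before renaming a Java method, scan
# other-language artifacts for references that the rename would silently break.
import re
from pathlib import Path

def cross_language_references(root: Path, identifier: str,
                              exts=(".xml", ".js", ".sql")):
    """Yield (file, line_no, line) for every non-Java artifact mentioning the
    identifier, e.g. via reflection strings, config files, or embedded queries."""
    pattern = re.compile(rf"\b{re.escape(identifier)}\b")
    for path in root.rglob("*"):
        if path.suffix in exts and path.is_file():
            for no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if pattern.search(line):
                    yield path, no, line.strip()

# e.g. list every place a rename of the hypothetical 'getCustomer' could break:
# for hit in cross_language_references(Path("src"), "getCustomer"): print(hit)
```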

13 citations


Proceedings ArticleDOI
01 Jan 2018

13 citations


Proceedings Article
10 Aug 2018
TL;DR: This paper introduces the PRINCE model, which consists of 3+1 types of requirements elicitation processes based on the timing of the maturation of the requirements elicitation activity, so that requirements can be elicited according to the needs of the actual development rather than a theoretical development process model.
Abstract: Requirements changes are sometimes pointed out as one of the causes of project failure. Our solution to cope with this problem is to plan and manage requirements changes strategically. This paper introduces the PRINCE model, which consists of 3+1 types of requirements elicitation processes based on the timing of the maturation of the requirements elicitation activity. To explain the model, we show a real case with quantitative observations of a requirements elicitation process. When we are able to elaborate a strategy of requirements elicitation with the PRINCE model, we can elicit requirements according to the needs of the actual development rather than a theoretical development process model.

12 citations


Proceedings ArticleDOI
01 Jan 2018

11 citations


Proceedings ArticleDOI
26 Jul 2018
TL;DR: This paper discusses the blend of cognitive computing with the Internet-of-Things that should result in the development of cognitive things.
Abstract: This paper discusses the blend of cognitive computing with the Internet-of-Things that should result in the development of cognitive things. Today's things are confined to a data-supplier role, which d ...

10 citations


Proceedings Article
30 Mar 2018
TL;DR: In this article, the authors present a multimodeling methodology that uses a CommonKADS conceptual model to interpret the diagnosis knowledge with the aim of representing the system with three models: a structural model, a functional model and a behavioural model.
Keywords: Modeling, Model Based Diagnosis, Dynamic Systems, Conceptual Model
Abstract: This paper presents the basis of a multimodeling methodology that uses a CommonKADS conceptual model to interpret the diagnosis knowledge, with the aim of representing the system with three models: a structural model describing the relations between the components of the system, a functional model describing the relations between the values the variables of the system can take (i.e. the functions), and a behavioural model describing the states of the system and the discrete events firing the state transitions. The relation between these models is made with the notion of variable: a variable used in a function of the functional model is associated with an element of the structural model, and a discrete event is defined as the assignment of a value to a variable. This methodology is presented in this paper with a toy yet pedagogical problem: the technical diagnosis of a car. The motivating idea is that using the same level of abstraction as the expert can facilitate the problem-solving reasoning.
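A rough sketch of the three linked models with the variable as the pivot, following the paper's car example; the specific components, value domains, and state names are illustrative assumptions.

```python
# Sketch: structural, functional, and behavioural models linked via variables.
from dataclasses import dataclass

@dataclass
class Variable:
    name: str
    component: str   # link into the structural model
    domain: tuple    # values the variable can take

# structural model: relations between components
structure = {"battery": ["starter"], "starter": ["engine"]}

# functional model: relations between the values variables can take
v_batt = Variable("battery_voltage", "battery", ("low", "ok"))
v_start = Variable("starter_turns", "starter", (False, True))
functions = {v_start.name: lambda env: env[v_batt.name] == "ok"}

# behavioural model: states plus the discrete events firing transitions;
# a discrete event is the assignment of a value to a variable: (name, value)
transitions = {("idle", ("starter_turns", True)): "cranking",
               ("idle", ("starter_turns", False)): "fault_suspected"}

env = {"battery_voltage": "low"}
env["starter_turns"] = functions["starter_turns"](env)
state = transitions[("idle", ("starter_turns", env["starter_turns"]))]
print(state, "->", v_start.component)  # fault_suspected -> starter, then battery via structure
```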

Proceedings Article
19 Apr 2018
TL;DR: The proposed incremental and iterative process fosters an agile approach to refactoring and optimization and is based on the assumption that services change their QARCC characteristics over time due to emerging opportunities for replacement of sub-components.
Abstract: An enterprise that exploits its IT services from the cloud, and optionally provides some of the services to its customers via the cloud, is defined by us as a cloud-connected enterprise (CCE). Consumption of IT services from the cloud and provisioning of services to the cloud define an IT supply-chain environment. Considering conceptually similar offerings from different vendors is economically attractive, as specialization in services increases the quality and cost-effectiveness of the service. The overall value of a service is composed of characteristics that may be summarized as QARCC: Quality, Agility, Risk, Capability and Cost. Tradeoffs between implementing services internally and consuming services externally may depend on these characteristics and their sub-characteristics. Regardless of the origin of the services or sub-services, we propose that the construction or consumption of the solution should follow a dedicated cloud-oriented lifecycle for managing such services. The proposed incremental and iterative process fosters an agile approach to refactoring and optimization. It is based on the assumption that services change their QARCC characteristics over time due to emerging opportunities for replacement of sub-components. It is designed to operate in internal clouds as well as external and hybrid ones.
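One way to picture the internal-versus-external tradeoff is a weighted QARCC score, re-evaluated each lifecycle iteration as characteristics drift. The weights and profile values below are illustrative assumptions, not the paper's method.

```python
# Hedged sketch: QARCC (Quality, Agility, Risk, Capability, Cost) as a weighted
# score; all weights and profile values are assumed for illustration.
WEIGHTS = {"quality": 0.3, "agility": 0.2, "risk": -0.2, "capability": 0.2, "cost": -0.1}

def qarcc_score(profile: dict) -> float:
    """Higher is better; risk and cost contribute negatively."""
    return sum(WEIGHTS[k] * profile[k] for k in WEIGHTS)

internal = {"quality": 0.9, "agility": 0.4, "risk": 0.2, "capability": 0.7, "cost": 0.8}
external = {"quality": 0.8, "agility": 0.9, "risk": 0.4, "capability": 0.9, "cost": 0.5}

# re-evaluated each iteration, since QARCC characteristics change over time
print("consume externally" if qarcc_score(external) > qarcc_score(internal)
      else "implement internally")
```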

Proceedings Article
10 Apr 2018
TL;DR: This paper presents a quantitative study of the performance of the trace segmentation technique with respect to the chosen parameters of the method and defines a clustering quality metric to identify the parameters providing the best results.
Abstract: Reverse-engineering methods using dynamic techniques rest on the post-mortem analysis of the execution trace of the programs. However, one key problem is to cope with the amount of data to process. In fact, such a file could contain hundreds of thousands of events. To cope with this data volume, we recently developed a trace segmentation technique. This lets us compute the correlation between classes and identify clusters of closely correlated classes. However, no systematic study of the quality of the clusters has been conducted so far. In this paper, we present a quantitative study of the performance of our technique with respect to the chosen parameters of the method. We then highlight the need for a benchmark and present the framework for the study. Then we discuss the matching metrics and present the results we obtained on the analysis of two very large execution traces. Finally, we define a clustering quality metric to identify the parameters providing the best results.
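A toy sketch of the underlying idea, under assumptions: the trace is a flat list of class names, segments are fixed-size windows, and class correlation is Pearson over per-segment occurrence counts; the paper's actual segmentation and parameters may differ.

```python
# Sketch: segment a trace, correlate class occurrence profiles, cluster by threshold.
from collections import Counter
from math import sqrt

def segment_profiles(trace, seg_len):
    """Per-class occurrence counts for each fixed-size trace segment."""
    segments = [trace[i:i + seg_len] for i in range(0, len(trace), seg_len)]
    classes = sorted(set(trace))
    return classes, [[Counter(s)[c] for s in segments] for c in classes]

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

trace = ["A", "B", "A", "B", "C", "C", "A", "B", "A", "B", "C", "C"]
classes, rows = segment_profiles(trace, seg_len=4)
for i in range(len(classes)):
    for j in range(i + 1, len(classes)):
        r = pearson(rows[i], rows[j])
        if r > 0.8:  # threshold and segment length are the tunable parameters
            print("cluster candidates:", classes[i], classes[j], round(r, 2))
```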

Proceedings ArticleDOI
01 Jan 2018
TL;DR: A novel solution that combines both hierarchical clustering and time series forecasting on the basis of the classical theory of market segmentation is proposed, which is much more effective, flexible, measurable and practical for CSPs to implement their cloud market strategies by rolling out different pricing models.
Abstract: The topics of cloud pricing models and resource management have been receiving enormous attention recently. However, very few studies have considered the importance of cloud market segmentation. Moreover, there is no practical and quantifiable solution for a cloud service provider (CSP) to segment the cloud market. We propose a novel solution that combines both hierarchical clustering and time series forecasting on the basis of the classical theory of market segmentation. In comparison with some traditional approaches, such as nested, analytic, Delphi, and strategy-based approaches, our method is much more effective, flexible, measurable and practical for CSPs to implement their cloud market strategies by rolling out different pricing models. Our test results and empirical analysis show that our solution can efficiently segment cloud markets and also predict the market demands. Our primary goal is to offer a new solution so that CSPs can tailor their limited cloud resources to their targeted market or cloud customers.
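An illustrative sketch of the two-stage pipeline: hierarchical clustering of customers, then a per-segment demand forecast. The customer features, synthetic data, and the naive linear-trend forecast are assumptions; the paper's exact pipeline may differ.

```python
# Sketch: hierarchical clustering + simple time series forecast per segment.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# hypothetical customer features: monthly spend, CPU hours, storage GB
customers = rng.random((40, 3)) * [500, 200, 1000]

Z = linkage(customers, method="ward")           # hierarchical clustering
segments = fcluster(Z, t=3, criterion="maxclust")

def forecast(series, horizon=1):
    """Naive forecast: extend the linear trend of the series."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)
    return intercept + slope * (len(series) + horizon - 1)

for seg in np.unique(segments):
    size = int((segments == seg).sum())
    history = size * (10 + np.arange(6) + rng.normal(0, 1, 6))  # toy monthly demand
    print(f"segment {seg} ({size} customers): next-month demand ~ {forecast(history):.1f}")
```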

Proceedings Article
08 Aug 2018
TL;DR: ONTOSERS-DM is introduced, a domain model that specifies the common and variable requirements of Recommender Systems based on the ontology technology of the Semantic Web, using three information filtering approaches: content-based, collaborative and hybrid filtering.
Abstract: The huge amount of data available on the Web and its dynamic nature are the source of an increasing demand for information filtering applications such as recommender systems. The lack of semantic structure of Web data is a barrier to improving the effectiveness of this kind of application. This paper introduces ONTOSERS-DM, a domain model that specifies the common and variable requirements of Recommender Systems based on the ontology technology of the Semantic Web, using three information filtering approaches: content-based, collaborative and hybrid filtering. ONTOSERS-DM was modeled under the guidelines of MADEM, a methodology for Multi-Agent Domain Engineering, using the ONTOMADEM
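For readers unfamiliar with the three filtering approaches the model covers, here is a generic hybrid-filtering illustration; the item features, ratings matrix, and 50/50 blend are assumptions, not ONTOSERS-DM's ontology-based design.

```python
# Generic hybrid filtering sketch: blend content-based and collaborative scores.
import numpy as np

items = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]], float)    # item feature vectors
ratings = np.array([[5, 3, 0], [4, 0, 2], [0, 4, 5]], float)  # users x items, 0 = unrated

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def hybrid_score(user, item, alpha=0.5):
    # content-based part: similarity of the item to the user's liked items
    liked = [i for i in range(len(items)) if ratings[user, i] >= 4]
    content = np.mean([cosine(items[item], items[i]) for i in liked]) if liked else 0.0
    # collaborative part: ratings of similar users for this item, normalized to 0..1
    sims = [(cosine(ratings[user], ratings[u]), ratings[u, item])
            for u in range(len(ratings)) if u != user and ratings[u, item] > 0]
    collab = (sum(s * r for s, r in sims) / sum(s for s, _ in sims) / 5) if sims else 0.0
    return alpha * content + (1 - alpha) * collab

print(round(hybrid_score(user=0, item=2), 2))  # blended recommendation score in [0, 1]
```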

Proceedings Article
26 Aug 2018
TL;DR: The present contribution focuses on the systematic construction of benchmarks for the evaluation of resource planning systems; these benchmarks are used to evaluate the resource management system GORBA and the optimization strategies for resource planning applied in this system.
Keywords: Resource Management System, Resource Broker, Evolutionary Algorithm
Abstract: The present contribution will focus on the systematic construction of benchmarks used for the evaluation of resource planning systems. Two characteristics for assessing the complexity of the benchmarks were developed. These benchmarks were used to evaluate the resource management system GORBA and the optimization strategies for resource planning applied in this system. At first, major aspects of GORBA, in particular two-step resource planning, will be described briefly, before the different classes of benchmarks will be defined. With the help of these benchmarks, GORBA was evaluated. The evaluation results will be presented and conclusions drawn. The contribution shall be completed by an outlook on further activities.

Proceedings ArticleDOI
26 Jul 2018
TL;DR: The approach proposed in this paper advocates two types of mutation, active and passive, along with a set of policies that either back or deny mutation based on specific “stopovers” referred to as permission, prohibition, dispensation, and obligation.
Abstract: This paper discusses mutation as a new way for making things, in the context of the Internet-of-Things (IoT), active instead of passive, as reported in the ICT literature. IoT is gaining momentum among ICT practitioners who see a lot of benefits in using things to support users in accessing and controlling their surroundings. However, things are still confined to the limited role of data suppliers. The approach proposed in this paper advocates two types of mutation, active and passive, along with a set of policies that either back or deny mutation based on specific “stopovers” referred to as permission, prohibition, dispensation, and obligation. A testbed and a set of experiments demonstrating the technical feasibility of the mutation approach are also presented in the paper. The testbed uses the NodeMCU firmware and a Lua script interpreter.
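A rough sketch of how the four policy "stopovers" could gate a mutation decision; the rule sets, thing types, and contexts are illustrative assumptions, not the paper's NodeMCU/Lua testbed logic.

```python
# Sketch: permission / prohibition / dispensation / obligation as policy tables.
PERMISSION   = {"sensor": ["active"]}                            # may undergo
PROHIBITION  = {"actuator": ["passive"], "sensor": ["passive"]}  # must not undergo
DISPENSATION = {("sensor", "passive"): "low_battery"}            # context waiving a rule
OBLIGATION   = {"gateway": "active"}                             # must adopt

def may_mutate(thing_type, target_role, context=None):
    """Walk the stopovers: obligation first, then prohibition (unless a
    dispensation applies in this context), then plain permission."""
    if OBLIGATION.get(thing_type) == target_role:
        return True
    if target_role in PROHIBITION.get(thing_type, []):
        waiver = DISPENSATION.get((thing_type, target_role))
        return waiver is not None and waiver == context
    return target_role in PERMISSION.get(thing_type, [])

print(may_mutate("sensor", "active"))                  # True: permitted
print(may_mutate("actuator", "passive"))               # False: prohibited
print(may_mutate("sensor", "passive", "low_battery"))  # True: dispensation applies
print(may_mutate("gateway", "active"))                 # True: obliged
```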

Proceedings Article
31 Mar 2018
TL;DR: By examining the state and characteristics of the Grid, the hyper-heuristic is able to find the planning of jobs to Grid resources that minimizes both the makespan and flowtime of the system.
Abstract: We present the design and implementation of a hyper-heuristic for efficiently scheduling independent jobs in Computational Grids. An efficient scheduling of jobs to Grid resources depends on many parameters, among others the characteristics of the Grid infrastructure as well as the job characteristics (such as computing capacity, consistency of computing, etc.). Existing ad hoc scheduling methods (batch and immediate mode) have shown their efficacy for certain types of Grid and job characteristics. However, they are not able to match the best Grid and job configuration while scheduling arriving jobs in the Grid system. In this work, we have designed and implemented a hyper-heuristic that uses a set of ad hoc (immediate and batch mode) scheduling methods to provide the best scheduling of jobs to Grid nodes according to the Grid and job characteristics. By examining the state and characteristics of the Grid, the hyper-heuristic is able to find the planning of jobs to Grid resources that minimizes both the makespan and flowtime of the system. The hyper-heuristic has been tested and evaluated using a standard benchmark of instances as well as a prototype of a simulator.
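A compact sketch of the hyper-heuristic idea: run several candidate ad hoc heuristics on the current batch and keep the schedule with the best makespan/flowtime. The two ordering heuristics shown (min-min and max-min style) are standard batch-mode methods; the ETC values and the selection rule are illustrative assumptions.

```python
# Sketch: a hyper-heuristic selecting among ad hoc batch scheduling heuristics.

def schedule(etc, order):
    """Greedy assignment: for each job (in the given order) pick the machine
    with the earliest completion time. etc[j][m] = estimated time to compute."""
    ready = [0.0] * len(etc[0])
    completions = []
    for j in order:
        m = min(range(len(ready)), key=lambda m: ready[m] + etc[j][m])
        ready[m] += etc[j][m]
        completions.append(ready[m])
    return max(ready), sum(completions)        # makespan, flowtime

def min_min(etc): return sorted(range(len(etc)), key=lambda j: min(etc[j]))
def max_min(etc): return sorted(range(len(etc)), key=lambda j: -min(etc[j]))

etc = [[3, 5], [2, 4], [6, 1], [4, 4]]          # 4 jobs x 2 machines (toy values)
best = min((schedule(etc, h(etc)), name)
           for h, name in [(min_min, "min-min"), (max_min, "max-min")])
(makespan, flowtime), chosen = best
print(f"{chosen}: makespan={makespan}, flowtime={flowtime}")
```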


Proceedings Article
10 Sep 2018
TL;DR: In this paper, the authors describe an approach for improving information flow throughout the software lifecycle via the (semi-)automated realization of abstract software-lifecycle processes and workflows in combination with Semantic Web technologies.
Abstract: For comprehensive software lifecycle processes, a trichotomy continues to subsist between the software development processes, enterprise IT processes, and the software runtime environment. Currently, integrating software lifecycle processes requires substantial effort, and the information needed for the execution of (semi-)automated software lifecycle workflows is not readily accessible and is typically scattered across semantically heterogeneous sources. Consequently, an interrupted flow of information ensues between the development/maintenance phases and operational phases in the software lifecycle, resulting in ignorance, inefficiencies, and suboptimal product quality and support levels. Furthermore, today's abstract IT (e.g., ITIL) and software processes are often manually translated into concrete processes and workflows, causing errors and extensive effort and limiting the widespread adoption of best practices. This paper describes an approach for improving information flow throughout the software lifecycle via the (semi-)automated realization of abstract software lifecycle processes and workflows in combination with Semantic Web technologies.



Proceedings ArticleDOI
01 Jan 2018
TL;DR: This paper investigates whether (and to what extent) UML diagrams can be used for identifying and assessing design smells, analyzing the representability of 14 kinds of design smells in UML class and sequence diagrams.
Abstract: Deficiencies in software design or architecture can severely impede and slow down the software development and maintenance progress. Bad smells and anti-patterns can be an indicator of poor software design and suggest refactoring the affected source code fragment. In recent years, multiple techniques and tools have been proposed to assist software engineers in identifying smells and guiding them through corresponding refactoring steps. However, these detection tools only cover a modest number of smells so far and also tend to produce false positives which represent conscious constructs with symptoms similar or identical to actual bad smells (e.g., design patterns). These and other issues in the detection process call for a code or design review in order to identify (missed) design smells and/or re-assess detected smell candidates. UML diagrams are the quasi-standard for documenting software design and are often available in software projects. In this position paper, we investigate whether (and to what extent) UML diagrams can be used for identifying and assessing design smells. Based on a description of difficulties in the smell detection process, we discuss the importance of design reviews. We then investigate to what extent design documentation in terms of UML2 diagrams allows for representing and identifying software design smells. In particular, 14 kinds of design smells and their representability in UML class and sequence diagrams are analyzed. In addition, we discuss further challenges for UML-based identification and assessment of bad smells.

Proceedings Article
02 Sep 2018
TL;DR: This paper describes work on aided refactoring construction and evolution based on the declarative definition of refactoring operations, built on frameworks, XML and reflective programming, easing migration from one programming language to another and bringing rational support for multilanguage development environments.
Abstract: Currently available refactoring tools, whether stand-alone or integrated into development environments, offer a static set of refactoring operations. Users (developers) can run these refactorings on their source code, but they cannot adjust, enhance, or evolve them, or even extend the refactoring set in a smooth way. Refactoring operations are hand-coded using some support libraries. The problems of maintaining or enriching refactoring tools and their libraries are the same as for any kind of software, introducing complexity in dealing with refactoring, managing and transforming software elements, etc. On the other hand, available refactoring tools are mainly language-dependent; thus the effort of reusing refactoring implementations is enormous when the source code programming language changes. This paper describes our work on aided refactoring construction and evolution based on the declarative definition of refactoring operations. The solution is based on frameworks, XML and reflective programming. A certain language independence is also achieved, easing migration from one programming language to another and bringing rational support for multilanguage development environments.
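A hedged sketch of what a declaratively defined refactoring could look like: an XML description selects and parameterizes an engine function via reflective dispatch. The XML vocabulary, the operation name, and the regex-based rename engine are assumptions, not the paper's actual framework.

```python
# Sketch: load a refactoring from an XML definition and dispatch reflectively.
import re
import xml.etree.ElementTree as ET

SPEC = """<refactoring name="rename-method">
  <param name="old">getData</param>
  <param name="new">fetchData</param>
</refactoring>"""

def load_refactoring(xml_text):
    root = ET.fromstring(xml_text)
    params = {p.get("name"): p.text for p in root.findall("param")}
    # "reflective" dispatch: the declared operation name selects the engine
    return globals()[root.get("name").replace("-", "_")], params

def rename_method(source, old, new):
    """Language-agnostic token rename; real tools would operate on an AST."""
    return re.sub(rf"\b{re.escape(old)}\b", new, source)

op, params = load_refactoring(SPEC)
print(op("x = obj.getData(); getDataCount += 1", **params))
# -> x = obj.fetchData(); getDataCount += 1  (word boundary spares the lookalike)
```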

Proceedings ArticleDOI
01 Aug 2018
TL;DR: A novel approach is proposed which leverages information from a compiler-generated AST to provide the necessary call graph quality, while the analysis program itself is written using an Island Grammar that parses the AST, providing the necessary lightweight aspect.
Abstract: Analysis of multilingual codebases is a topic of increasing importance. In prior work, we proposed the MLSA (MultiLingual Software Analysis) architecture, an approach to the lightweight analysis of multilingual codebases, and showed how it can be used to address the challenge of constructing a single call graph from multilingual software with mutual calls. This paper addresses the challenge of constructing monolingual call graphs in a lightweight manner (consistent with the objective of MLSA) which nonetheless yields sufficient information for resolving language interoperability calls. A novel approach is proposed which leverages information from a compiler-generated AST to provide the necessary call graph quality, while the analysis program itself is written using an Island Grammar that parses the AST, providing the necessary lightweight aspect. Performance results are presented for a C/C++ implementation of the approach, PAIGE (Parsing AST using Island Grammar Call Graph Emitter), showing that despite its lightweight nature, it outperforms Doxygen, is robust to changes in the (Clang) AST, and is not restricted to C/C++.
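A toy illustration of the island-grammar idea behind PAIGE: match only the "islands" of interest (function declarations and call expressions) in an AST dump and treat every other node kind as "water". The dump format below is a simplified, assumed stand-in for real Clang output, and PAIGE itself is implemented in C/C++, not Python.

```python
# Sketch: island-grammar style extraction of call edges from an AST dump.
import re

AST_DUMP = """\
FunctionDecl main 'int ()'
  CompoundStmt
    CallExpr 'void'
      DeclRefExpr Function 'helper'
FunctionDecl helper 'void ()'
  CompoundStmt
"""

FUNC = re.compile(r"^\s*FunctionDecl (\w+)")
CALL = re.compile(r"DeclRefExpr Function '(\w+)'")

def call_graph(dump):
    edges, current = [], None
    for line in dump.splitlines():
        if m := FUNC.match(line):
            current = m.group(1)          # entering a new function "island"
        elif current and (m := CALL.search(line)):
            edges.append((current, m.group(1)))
        # every other node kind is "water": skipped, so unrelated AST
        # changes leave the extractor unaffected
    return edges

print(call_graph(AST_DUMP))  # -> [('main', 'helper')]
```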