
Showing papers presented at "International Conference on Software and Data Technologies in 2017"


Proceedings ArticleDOI
01 Jan 2017
TL;DR: This paper describes the design decisions made during the development of the KYPO cyber range and presents a set of use cases prepared to evaluate those decisions and demonstrate the platform's key features.
Abstract: The physical and cyber worlds are increasingly intertwined and exposed to cyber attacks. The KYPO cyber range provides complex cyber systems and networks in a virtualized, fully controlled and monitored environment. Time-efficient and cost-effective deployment is feasible using cloud resources instead of a dedicated hardware infrastructure. This paper describes the design decisions made during its development. We prepared a set of use cases to evaluate the proposed design decisions and to demonstrate the key features of the KYPO cyber range. In particular, cyber training sessions and exercises with hundreds of participants provided invaluable feedback for the development of the KYPO platform.

67 citations


Proceedings ArticleDOI
01 Jan 2017
TL;DR: This work proposes a new task allocation process using deep reinforcement learning that allows cooperating agents to act automatically and learn how to communicate with other neighboring agents to allocate tasks and share resources.
Abstract: The task allocation problem in a distributed environment is one of the most challenging problems in a multiagent system. We propose a new task allocation process using deep reinforcement learning that allows cooperating agents to act automatically and learn how to communicate with other neighboring agents to allocate tasks and share resources. Through learning capabilities, agents will be able to reason conveniently, generate an appropriate policy and make a good decision. Our experiments show that it is possible to allocate tasks using deep Q-learning and more importantly show the performance of our distributed task allocation approach.
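To make the learning loop concrete, here is a minimal, hypothetical sketch of deep Q-learning applied to a toy task-allocation decision (state = remaining agent capacities, action = chosen agent, reward = avoiding overload). It illustrates the general technique only; the network size, state encoding and reward design are assumptions, not the architecture used in the paper.

```python
# Hypothetical sketch: deep Q-learning for assigning an incoming task to one of
# N agents. State = remaining capacity of each agent; action = chosen agent.
# Generic illustration, not the paper's actual architecture or reward design.
import random
import torch
import torch.nn as nn

N_AGENTS = 4
q_net = nn.Sequential(nn.Linear(N_AGENTS, 32), nn.ReLU(), nn.Linear(32, N_AGENTS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
gamma, epsilon = 0.9, 0.1

def step(capacities, action):
    """Assign one unit of work to the chosen agent; reward overload negatively."""
    reward = 1.0 if capacities[action] > 0 else -1.0
    next_caps = capacities.copy()
    next_caps[action] = max(0, next_caps[action] - 1)
    return reward, next_caps

for episode in range(200):
    caps = [3, 3, 3, 3]                       # each agent starts with capacity 3
    for _ in range(12):                        # 12 tasks arrive per episode
        state = torch.tensor(caps, dtype=torch.float32)
        if random.random() < epsilon:          # epsilon-greedy exploration
            action = random.randrange(N_AGENTS)
        else:
            action = int(q_net(state).argmax())
        reward, caps = step(caps, action)
        next_state = torch.tensor(caps, dtype=torch.float32)
        with torch.no_grad():                  # one-step Bellman target
            target = reward + gamma * q_net(next_state).max()
        loss = loss_fn(q_net(state)[action], target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```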

30 citations


Proceedings ArticleDOI
01 Jan 2017
TL;DR: A mechanism that employs static analysis metrics extracted from GitHub projects and defines a target quality score based on repositories’ stars and forks, which indicate their adoption/acceptance by the developers’ community is built.
Abstract: Nowadays, software has to be designed and developed as fast as possible, while maintaining quality standards. In this context, developers tend to adopt a component-based software engineering approach, reusing their own implementations and/or resorting to third-party source code. This practice is in principle cost-effective; however, it may lead to low quality software products. Thus, measuring the quality of software components is of vital importance. Several approaches that use code metrics rely on the aid of experts for defining target quality scores and deriving metric thresholds, leading to results that are highly context-dependent and subjective. In this work, we build a mechanism that employs static analysis metrics extracted from GitHub projects and defines a target quality score based on repositories’ stars and forks, which indicate their adoption/acceptance by the developers’ community. Upon removing outliers with a one-class classifier, we employ Principal Feature Analysis and examine the semantics among metrics to provide an analysis on five axes for a source code component: complexity, coupling, size, degree of inheritance, and quality of documentation. Neural networks are used to estimate the final quality score given metrics from all of these axes. Preliminary evaluation indicates that our approach can effectively estimate software quality.
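The pipeline described above (outlier removal with a one-class classifier, feature analysis, neural-network regression to a popularity-based target) can be sketched roughly as follows. The synthetic metric values, the log(stars + forks) target and the model settings are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of the pipeline: filter outlier repositories with a
# one-class classifier, then regress a popularity-based target score from
# static analysis metrics. Metric values and target formula are illustrative.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Rows = repositories; columns = static analysis metrics (complexity, coupling, ...).
metrics = rng.normal(size=(500, 5))
stars = rng.integers(0, 5000, 500)
forks = rng.integers(0, 1000, 500)
target = np.log1p(stars + forks)             # assumed popularity-based quality proxy

X = StandardScaler().fit_transform(metrics)
inliers = OneClassSVM(nu=0.05).fit_predict(X) == 1    # drop outlier repositories
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[inliers], target[inliers])
print("predicted quality score:", model.predict(X[:1])[0])
```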

13 citations


Proceedings ArticleDOI
01 Jan 2017
TL;DR: An APLE methodology called AgiFPL (Agile Framework for managing evolving Product Lines) is proposed to address the deficiencies identified in current methods while making use of their advantages.
Abstract: Integrating Agile Software Development (ASD) with Software Product Line Engineering (PLE) has resulted in proposing Agile Product Line Engineering (APLE). The goal of combining both approaches is to overcome each other's weaknesses while maximizing their benefits. However, combining them represents a big challenge in software engineering. Several methods have been proposed to provide a practical process for applying APLE in organizations, but none covers all the required APLE features. This paper proposes an APLE methodology called AgiFPL (Agile Framework for managing evolving Product Lines) that addresses the deficiencies identified in current methods while making use of their advantages.

11 citations


Proceedings ArticleDOI
01 Jan 2017
TL;DR: The main features, potential advantages and current limitations of the main tools that exist for the development of graphical editors for visual DSLs are reviewed.
Abstract: Visual Domain Specific Languages play a fundamental role in the development of model-driven software. The growing number of such visual languages and the inherent complexity of developing graphical editors for them have, in recent years, led to the emergence of several tools that provide technical support for this task. Most of these tools are based on the use of models and increase the level of automation of software development, which are the basic principles of Model Driven Engineering. This paper therefore reviews the main features, potential advantages and current limitations of the main tools that exist for the development of graphical editors for visual DSLs.

9 citations


Proceedings ArticleDOI
01 Jan 2017
TL;DR: This paper adopts metrics-based assessment to evaluate the quality of business process models, modeled with Business Process Model and Notation (BPMN), in terms of their comprehensibility and modifiability, and proposes a fuzzy logic-based approach that uses existing quality metrics for assessing the attainment level of these two quality characteristics.
Abstract: Similar to software products, the quality of a Business Process model is vital to the success of all the phases of its lifecycle. Indeed, a high quality BP model paves the way to the successful implementation, execution and performance of the business process. In the literature, the quality of a BP model has been assessed through either the application of formal verification, or most often the evaluation of quality metrics calculated in the static and/or simulated model. Each of these assessment means addresses different quality characteristics and meets particular analysis needs. In this paper, we adopt metrics-based assessment to evaluate the quality of business process models, modeled with Business Process Model and Notation (BPMN), in terms of their comprehensibility and modifiability. We propose a fuzzy logic-based approach that uses existing quality metrics for assessing the attainment level of these two quality characteristics. By analyzing the static model, the proposed approach is easy and fast to apply. In addition, it overcomes the threshold determination problem by mining a repository of BPMN models. Furthermore, by relying on fuzzy logic, it resembles human reasoning during the evaluation of the quality of business process models. We illustrate the approach through a case study and its tool support system developed under the Eclipse framework. The preliminary experimental evaluation of the proposed system shows encouraging results.
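As a rough illustration of the fuzzy-logic idea, the sketch below fuzzifies two structural metrics of a BPMN model and combines simple Mamdani-style rules into a comprehensibility score. The metric choice, membership breakpoints and rules are assumptions; the paper mines its thresholds from a BPMN repository instead.

```python
# Hypothetical sketch of fuzzy-logic scoring of a BPMN model's comprehensibility.
# Metric names, membership breakpoints, and rules are illustrative assumptions,
# not the thresholds mined by the authors from their BPMN repository.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def comprehensibility(num_activities, nesting_depth):
    # Fuzzify the two structural metrics.
    size_low  = tri(num_activities, -1, 0, 25)
    size_high = tri(num_activities, 15, 60, 200)
    depth_low  = tri(nesting_depth, -1, 0, 3)
    depth_high = tri(nesting_depth, 2, 6, 12)
    # Mamdani-style rules: min for AND, max to combine rules with the same output.
    good = min(size_low, depth_low)
    poor = max(size_high, depth_high)
    # Defuzzify as a weighted average of the two output levels (good=1, poor=0).
    total = good + poor
    return 0.5 if total == 0 else good / total

print(comprehensibility(num_activities=18, nesting_depth=2))  # closer to 1 = more comprehensible
```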

8 citations


Proceedings ArticleDOI
01 Jan 2017
TL;DR: Modelling cyber-physical systems with the 2HMD approach gives an opportunity to transparently compose and analyse the system components to be provided and the components actually provided, and thus to identify and fill the gaps between the desired and the actual system content.
Abstract: The Two-hemisphere model-driven (2HMD) approach assumes modelling and use of procedural and conceptual knowledge on an equal and related basis. This differentiates the 2HMD approach from pure procedural, pure conceptual, and object-oriented approaches. The approach may be applied in the context of modelling a particular business domain as well as modelling the knowledge about that domain. Cyber-physical systems are heterogeneous systems, which require a multi-disciplinary approach to their modelling. Modelling cyber-physical systems with the 2HMD approach gives an opportunity to transparently compose and analyse the system components to be provided and the components actually provided, and thus to identify and fill the gaps between the desired and the actual system content.

8 citations


Proceedings ArticleDOI
01 Jan 2017
TL;DR: A solution for real-time monitoring of vehicles and detection of rising levels of carbon emissions, called EcoLogic, which consists of a hardware module that collects sensor data related to vehicles’ carbon emissions and cloud-based applications for data processing, analysis and visualisation.
Abstract: Today, sensors and the Internet of Things (IoT) are naturally present in people’s lives. Billions of interactive devices exchange information about a variety of objects in the physical world. IoT technologies affect the business processes of all major industries such as transportation, manufacturing, healthcare, agriculture, etc. Beyond its positive impact on both people and industry, the IoT also provides benefits for the environment. The IoT is recognized as a powerful tool in the fight against climate change. More specifically, it has a significant potential for saving carbon emissions. Taking into account the promising areas of IoT application, this paper proposes a solution for real-time monitoring of vehicles and detection of rising levels of carbon emissions, called EcoLogic. EcoLogic consists of a hardware module that collects sensor data related to vehicles’ carbon emissions and cloud-based applications for data processing, analysis and visualisation. Its primary purpose is to control carbon emissions through smart notifications and vehicle power limitations.
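A minimal sketch of the kind of cloud-side processing described above might look like the following: ingest per-vehicle CO2 readings, keep a short moving average, and raise a notification when it exceeds a threshold. Field names and the threshold are assumptions, not EcoLogic's actual design.

```python
# Hypothetical sketch of the cloud-side detection logic: watch a stream of
# per-vehicle CO2 readings and raise a notification when the moving average
# rises above a threshold. Field names and thresholds are assumptions.
from collections import deque

CO2_LIMIT_G_PER_KM = 130.0      # assumed alert threshold
WINDOW = 5                       # number of recent readings to average

class EmissionMonitor:
    def __init__(self):
        self.readings = {}       # vehicle_id -> recent readings

    def ingest(self, vehicle_id, co2_g_per_km):
        window = self.readings.setdefault(vehicle_id, deque(maxlen=WINDOW))
        window.append(co2_g_per_km)
        avg = sum(window) / len(window)
        if len(window) == WINDOW and avg > CO2_LIMIT_G_PER_KM:
            self.notify(vehicle_id, avg)

    def notify(self, vehicle_id, avg):
        # In the real system this would push a smart notification or power limit request.
        print(f"ALERT: vehicle {vehicle_id} average CO2 {avg:.1f} g/km exceeds limit")

monitor = EmissionMonitor()
for reading in [120, 128, 135, 141, 150, 155]:
    monitor.ingest("vehicle-42", reading)
```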

7 citations


Proceedings ArticleDOI
01 Jan 2017
TL;DR: A model for partial agile methods adoption based on intentional (i.e., goal) perspectives is proposed that will help the software development team to easily identify the vulnerabilities associated with each goal and, in turn, help to minimize risks.
Abstract: Nowadays, the agile paradigm is one of the most important approaches used for software development besides structured and traditional life cycles. To facilitate its adoption and minimize the risks, different metamodels have been proposed trying to unify it. Yet, very few of them have focused on one fundamental question: How to partially adopt agile methods? Intuitively, the choice of which practices to adopt from agile methods should be based on the team's most prioritized goals in the software development process. To address this issue, this paper proposes a model for partial agile methods adoption based on intentional (i.e., goal) perspectives. Hence, adoption can be considered as defining the goals in the model, corresponding to the intentions of the software development team. Next, by mapping with our goal-based model, suitable practices for adoption could be easily found. Moreover, the relationship between roles and their dependencies to achieve a specific goal can also be visualized. This will help the software development team to easily identify the vulnerabilities associated with each goal and, in turn, help to minimize risks.

7 citations


Proceedings ArticleDOI
01 Jan 2017
TL;DR: An approach to game bot detection in online role-playing games is proposed, based on machine learning techniques that discriminate between users and game bots using user behavioural features.
Abstract: The market of online games has grown rapidly in recent years, thanks to the availability of ever more effective gaming infrastructures and the increased quality of the developed games. The diffusion of online games also increases the use of game bots to automatically perform malicious tasks and obtain rewards (with a consequent economic advantage or popularity) in the game community with low effort. This causes disappointment in the player community and has become a critical issue for game developers. For this reason, distinguishing between game bots and human behaviour has become essential in order to detect malicious tasks and consequently increase player satisfaction. In this paper, the authors propose an approach to game bot detection in online role-playing games based on machine learning techniques that discriminate between users and game bots on the basis of user behavioural features. The approach is applied to a real-world dataset of a popular role-playing game and the obtained results are encouraging.
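A hypothetical sketch of behaviour-based bot detection along these lines is shown below: train a classifier on per-player behavioural features and predict bot versus human. The feature set, the synthetic data and the choice of a random forest are illustrative; the paper only states that machine learning techniques are used.

```python
# Hypothetical sketch of behaviour-based bot detection. Feature names, synthetic
# data, and the random forest choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 1000
# Assumed behavioural features: actions/minute, session length (h), chat messages/h, path entropy.
humans = np.column_stack([rng.normal(40, 10, n), rng.normal(2, 1, n),
                          rng.normal(20, 8, n), rng.normal(0.8, 0.1, n)])
bots   = np.column_stack([rng.normal(90, 5, n),  rng.normal(10, 2, n),
                          rng.normal(1, 1, n),   rng.normal(0.3, 0.1, n)])
X = np.vstack([humans, bots])
y = np.array([0] * n + [1] * n)               # 0 = human, 1 = bot

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["human", "bot"]))
```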

6 citations


Proceedings ArticleDOI
01 Jan 2017
TL;DR: A top-down decomposition approach is proposed to specify requirements and analyse change impact on BPMN models at different stages of the business process lifecycle.
Abstract: When performing functional requirements analysis, software developers need to understand the application domain to fulfil organizational needs. This is essential for making trade-off decisions and achieving the success of the software development project. An application domain is dealt with in the modelling phase of the business process lifecycle. Assuming that functional changes are inevitable, we propose to use the COSMIC standard to evaluate these changes and provide indicators of change status in the business domain. Expressing functional changes in terms of COSMIC Function Point units can be helpful in identifying changes leading to potential impact on the business process's functional size. In addition, we propose a top-down decomposition approach to specify requirements and analyse change impact on BPMN models at different stages of the business process lifecycle.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: This paper proposes designing a neighborhood structure employing a permutation process to exploit the most promising regions of the search space while considering the diversity of the population, and presents a resolution approach based on a Min-Max Tchebycheff iterated local search algorithm called Min-Max TLS.
Abstract: The multi-objective multidimensional knapsack problem (MOMKP), which is one of the hardest multi-objective combinatorial optimization problems, presents a formal model for many real world problems. Its main goal consists in selecting a subset of items in order to maximize m objective functions with respect to q resource constraints. For that purpose, we present in this paper a resolution approach based on a Min-Max Tchebycheff iterated local search algorithm called Min-Max TLS. In this approach, we propose designing a neighborhood structure employing a permutation process to exploit the most promising regions of the search space while considering the diversity of the population. Therefore, Min-Max TLS uses Min-Max N(s) as a neighborhood structure, combining a Min-Extraction-Item algorithm and a Max-Insertion-Item algorithm. Moreover, in Min-Max TLS two Tchebycheff functions, used as a selection process, are studied: the weighted Tchebycheff (WT) and the augmented weighted Tchebycheff (AugWT). Experimental results are carried out with nine well-known benchmark instances of MOMKP. Results have shown the efficiency of the proposed approach in comparison to other approaches.
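The two scalarizing functions named above have standard definitions, sketched below for maximization objectives measured against an ideal reference point z*; the exact weights and augmentation parameter used in the paper may differ.

```python
# Standard definitions of the weighted Tchebycheff (WT) and augmented weighted
# Tchebycheff (AugWT) scalarizations, used to rank candidate MOMKP solutions
# (maximization objectives) against an ideal reference point z*.
def weighted_tchebycheff(f, z_star, weights):
    """WT(x) = max_i w_i * (z*_i - f_i(x)); smaller is better for maximization."""
    return max(w * (z - fi) for w, z, fi in zip(weights, z_star, f))

def augmented_weighted_tchebycheff(f, z_star, weights, rho=0.01):
    """AugWT adds a small weighted-sum term to avoid weakly efficient solutions."""
    return (weighted_tchebycheff(f, z_star, weights)
            + rho * sum(z - fi for z, fi in zip(z_star, f)))

# Example: two objective vectors compared under the same weights and ideal point.
z_star, weights = [100, 80], [0.5, 0.5]
for f in ([90, 70], [95, 60]):
    print(f, weighted_tchebycheff(f, z_star, weights),
          augmented_weighted_tchebycheff(f, z_star, weights))
```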

Proceedings ArticleDOI
01 Jan 2017
TL;DR: In this article, the authors present requirements that a trace links taxonomy should satisfy and present a technique to build a trace link taxonomy that has well-defined semantics, which can be configured with traceability models using Open Service for Lifecycle Collaboration (OSLC).
Abstract: Software traceability provides a means for capturing the relationship between artifacts at all phases of software and systems development. The relationships between the artifacts that are generated during systems development can provide valuable information for software and systems engineers. It can be used for change impact analysis and systems verification and validation, among other things. However, there is no consensus among researchers about the syntax or semantics of trace links across multiple domains. Moreover, existing trace link classifications do not consider a unified method for combining all trace link types in one taxonomy that can be utilized in Requirements Engineering, Model Driven Engineering and Systems Engineering. This paper is one step towards solving this issue. We first present requirements that a trace links taxonomy should satisfy. Second, we present a technique to build a trace links taxonomy that has well-defined semantics. We implemented the taxonomy by employing Linked Data and the Resource Description Framework (RDF). The taxonomy can be configured with traceability models using Open Services for Lifecycle Collaboration (OSLC) in order to capture traceability information among different artifacts and at different levels of granularity. In addition, the taxonomy offers reasoning as well as quantitative and qualitative analysis of trace links. We present validation criteria for the taxonomy requirements and validate the solution through an example.
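A minimal sketch of expressing a typed trace link in RDF, in the spirit of the taxonomy described above, is given below using rdflib. The vocabulary namespace and link-type names are assumptions, and the OSLC integration is omitted.

```python
# Minimal sketch of representing a typed trace link as RDF. The namespace URI and
# link-type names are hypothetical; OSLC service integration is omitted.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

TRACE = Namespace("http://example.org/trace#")   # hypothetical vocabulary
g = Graph()
g.bind("trace", TRACE)

# Taxonomy fragment: 'satisfies' is a specialization of a generic trace link.
g.add((TRACE.TraceLink, RDF.type, RDFS.Class))
g.add((TRACE.satisfies, RDFS.subPropertyOf, TRACE.linksTo))

# One concrete link: a design model element satisfies a requirement.
req = URIRef("http://example.org/artifacts/REQ-12")
model = URIRef("http://example.org/artifacts/PumpController")
g.add((model, TRACE.satisfies, req))

print(g.serialize(format="turtle"))
```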

Proceedings ArticleDOI
01 Jan 2017
TL;DR: It is shown how goal-oriented requirements engineering (GORE) is able to provide a strong foundation to support the evolution of requirements engineering practices and also in connection with related processes such as business analysis, technical specification and testing.
Abstract: Nowadays, mastering the requirements phase is still challenging for companies of any size and often impacts the quality, delay or cost of the delivered software system. While smaller companies may suffer from maturity, resource or tooling problems, larger companies have to cope with the larger size, complexity and cross-dependencies between their projects. This paper reports about the work carried out over the past three years to address such challenges within Huawei, a very large Chinese company active worldwide in the high-tech and telecommunication sectors, with the help of experts from the requirements engineering community. We show how goal-oriented requirements engineering (GORE) is able to provide a strong foundation to support the evolution of requirements engineering practices, also in connection with related processes such as business analysis, technical specification and testing. We also report about our experience in developing adequate tool support to achieve successful industrial adoption and address team-work, scalability and toolchain integration needs. Although anchored in a specific case, most of the reported issues are shared by many companies in many domains. To further abstract away from our case, we also formulate some "Chinese wisdom" learned, identify useful strategies for successful technology transfer and point to further research challenges.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: Decision procedures and criteria to check the conformance of observed execution traces against a specification set by a UML SD enriched with time constraints are investigated.
Abstract: The provisioning of a growing number of services depends on the proper interoperation of multiple products, forming a new distributed system, often subject to timing requirements. To ensure the interoperability and timely behavior of this new distributed system, it is important to conduct integration tests that verify the interactions with the environment and between the system components. Integration test scenarios for that purpose may be conveniently specified by means of UML sequence diagrams (SDs) enriched with time constraints. The automation of such integration tests requires that test components are also distributed, with a local tester deployed close to each system component, coordinated by a central tester. The distributed observation of execution events, combined with the impossibility to ensure clock synchronization in a distributed system, poses special challenges for checking the conformance of the observed execution traces against the specification, possibly yielding inconclusive verdicts. Hence, in this paper we investigate decision procedures and criteria to check the conformance of observed execution traces against a specification set by a UML SD enriched with time constraints. The procedures and criteria are specified in a formal language that allows executing and validating the specification. Examples are presented to illustrate the approach.
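One of the checks discussed above can be illustrated with a small, hypothetical example: verifying a maximum-delay constraint between two observed events when bounded clock uncertainty between distributed observers can make the verdict inconclusive. The constraint and the uncertainty model are assumptions, not the paper's formal definitions.

```python
# Hypothetical sketch of one conformance check: verify that the observed delay
# between two events satisfies a time constraint from the sequence diagram,
# returning an inconclusive verdict when clock uncertainty spans the bound.
# Event names, the constraint, and the uncertainty model are assumptions.
def check_delay(t_send, t_recv, max_delay, clock_uncertainty):
    """Return 'pass', 'fail', or 'inconclusive' for constraint t_recv - t_send <= max_delay."""
    observed = t_recv - t_send
    if observed + clock_uncertainty <= max_delay:
        return "pass"            # even worst-case skew keeps us within the bound
    if observed - clock_uncertainty > max_delay:
        return "fail"            # even best-case skew violates the bound
    return "inconclusive"        # the verdict depends on unobservable clock offsets

print(check_delay(t_send=0.000, t_recv=0.180, max_delay=0.200, clock_uncertainty=0.010))  # pass
print(check_delay(t_send=0.000, t_recv=0.195, max_delay=0.200, clock_uncertainty=0.010))  # inconclusive
```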

Proceedings ArticleDOI
01 Jan 2017
TL;DR: A new verification approach to deal with the formal verification of reconfiguration scenarios of adaptive systems, which consists of two verification steps: design time and runtime verification.
Abstract: Adaptive systems are able to modify their behaviors to cope with unpredictable significant changes at runtime, such as component failures. These systems are critical for future projects and other intelligent systems. Reconfiguration is often a major undertaking for a system: it might make its functions unavailable for some time and cause potential harm to human life or large financial investments. Thus, updating a system with a new configuration requires the assurance that the new configuration will fully satisfy the expected requirements. Formal verification has been widely used to guarantee that a system specification satisfies a set of properties. However, applying verification techniques at run time for any potential change can be very expensive and sometimes unfeasible. In this paper, we propose a new verification approach to deal with the formal verification of these reconfiguration scenarios. A new reconfigurable CTL semantics is introduced to cover the verification of reconfigurable properties. The approach consists of two verification steps: design-time and runtime verification. A railway case study is also presented.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: This paper proposes a process framework called GI-Tropos that extends I-Tropos in order to align requirements-driven software processes with IT governance.
Abstract: Requirements Engineering is closely intertwined with Information Technology (IT) Governance. Aligning IT Governance principles with requirements-driven software processes then makes it possible to propose governance and management rules for software development that cope with stakeholders’ requirements and expectations. Typically, the goal of IT Governance in software engineering is to ensure that the results of a software organization's business processes meet the strategic requirements of the organization. Requirements-driven software processes, such as (I-)Tropos, are development processes using high-level social-oriented models to drive the software life cycle both in terms of project management and forward engineering techniques. To consolidate both perspectives, this paper proposes a process framework called GI-Tropos that extends I-Tropos in order to align requirements-driven software processes with IT governance.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: A technique is proposed to detect code fragments that are non-compliant with the architecture as fine-grained architectural violations; it is demonstrated by defining inference rules for the MVC2 architecture and applying the technique to web applications using the Play Framework.
Abstract: Utilizing software architecture patterns is important for reducing maintenance costs. However, maintaining code according to the constraints defined by the architecture patterns is time-consuming work. As described herein, we propose a technique to detect code fragments that are non-compliant with the architecture as fine-grained architectural violations. The inputs of this technique are the dependence graph among code fragments extracted from the source code and the inference rules according to the architecture. A set of candidate components to which a code fragment can be affiliated is attached to each node of the graph and is updated step-by-step. The inference rules express the components’ responsibilities and dependency constraints. They remove, from each node, the candidate components that do not satisfy the constraints given the current estimated state of the surrounding code fragments. If the resulting candidate set does not include the component to which the code fragment actually belongs, the fragment is detected as a violation. By defining inference rules for the MVC2 architecture and applying the technique to web applications using the Play Framework, we obtained accurate detection results.
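The candidate-pruning idea can be sketched as follows: every code fragment starts with all components as candidates, dependency constraints prune the sets to a fixed point, and a fragment whose declared component drops out of its candidate set is flagged. The MVC constraints below are simplified illustrations, not the paper's actual inference rules for the Play Framework.

```python
# Hypothetical sketch of candidate pruning over a dependence graph. Every fragment
# starts with all MVC2 components as candidates; dependency constraints prune the
# sets to a fixed point; a fragment whose declared component drops out is flagged.
COMPONENTS = {"model", "view", "controller"}
ALLOWED_DEPS = {"view": {"model"}, "controller": {"model", "view"}, "model": set()}

fragments = {   # fragment -> (declared component, fragments it depends on)
    "UserModel":      ("model", set()),
    "UserView":       ("view", {"UserModel"}),
    "UserController": ("controller", {"UserModel", "UserView"}),
    "ReportModel":    ("model", {"UserView"}),     # a model depending on a view
}
candidates = {name: set(COMPONENTS) for name in fragments}

def consistent(cand, deps):
    # Keep cand only if every dependency could still be a component cand may use.
    return all(candidates[d] & ALLOWED_DEPS[cand] for d in deps)

changed = True
while changed:                                     # prune until a fixed point
    changed = False
    for name, (_, deps) in fragments.items():
        kept = {c for c in candidates[name] if consistent(c, deps)}
        if kept != candidates[name]:
            candidates[name], changed = kept, True

for name, (declared, _) in fragments.items():
    if declared not in candidates[name]:
        print(f"violation: {name} declared '{declared}', inferred {candidates[name]}")
```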

Proceedings Article
03 Mar 2017
TL;DR: The hyperset approach to WEB-like or semistructured databases is outlined and the current state of affairs on experimental implementation of a query language ∆ (Delta) to such databases is described, with consideration of further implementation work to be done.
Abstract: The hyperset approach to WEB-like or semistructured databases is outlined. A WDB is presented either (i) as a finite edge-labelled graph or, equivalently, (ii) as a system of (hyper)set equations or (iii) in a special XML-WDB format convenient both for distributed WDB and for including arbitrary XML elements in this framework. The current state of affairs on the experimental implementation of a query language ∆ (Delta) for such databases—the main result of this paper—is described, with consideration of further implementation work to be done.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: This paper analyses the following five open source platforms for Big Data Analytics: Apache Hadoop, Cloudera, Spark, Hortonworks, and HPCC.
Abstract: Nowadays organizations look to Big Data as an opportunity to manage and explore their data with the objective of supporting decisions within their different operational areas. Therefore, it is necessary to analyse several concepts about Big Data Analytics, including definitions, features, advantages and disadvantages. By investigating today's big data platforms, current industrial practices and related trends in the research world, it is possible to understand the impact of Big Data Analytics on smaller organizations. This paper analyses the following five open source platforms for Big Data Analytics: Apache Hadoop, Cloudera, Spark, Hortonworks, and HPCC.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: This paper addresses platforms developed to provide and configure virtual networks according to users' requests and needs and proposes an effective way to check request consistency.
Abstract: In this paper, we address platforms developed to provide and configure virtual networks according to users' requests and needs. User requests are, however, not always accurate and can contain a number of inconsistencies. The requests need to be thoroughly analyzed and verified before being applied to such platforms. We consequently identify some important properties for the verification and classify them into three groups: a) functional or logic issues, b) resource allocation/dependency issues, and c) security issues. For each group, we propose an effective way to check the request consistency. The issues of the first group are checked with the use of scalable Boolean matrix operations. The properties of the second group can be verified through the use of an appropriate system of logic implications. When checking the issues of the third group, the corresponding string analysis can be utilized. All the techniques discussed in the paper are followed by a number of illustrating examples.
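A small, hypothetical example of the first group of checks using Boolean matrix operations: compute reachability over the requested virtual links with a Warshall-style closure and flag an isolation requirement that the request violates. The nodes and the property are illustrative only.

```python
# Hypothetical sketch of a functional/logic check using Boolean matrix operations:
# compute reachability over the requested virtual links (Warshall-style closure)
# and flag requirements of the form "X must not reach Y" that the request violates.
import numpy as np

nodes = ["tenantA", "router", "tenantB", "storage"]
idx = {n: i for i, n in enumerate(nodes)}

requested_links = [("tenantA", "router"), ("router", "storage"), ("tenantB", "router")]
adj = np.zeros((len(nodes), len(nodes)), dtype=bool)
for src, dst in requested_links:
    adj[idx[src], idx[dst]] = True

reach = adj.copy()
for k in range(len(nodes)):              # Boolean transitive closure
    reach |= np.outer(reach[:, k], reach[k, :])

must_not_reach = [("tenantA", "storage")]    # assumed isolation requirement
for src, dst in must_not_reach:
    if reach[idx[src], idx[dst]]:
        print(f"inconsistent request: {src} can reach {dst}")
```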

Proceedings ArticleDOI
01 Jan 2017
TL;DR: An architecture is proposed for lightweight static analysis of large multilingual codebases: the MLSA architecture, which addresses the open-ended nature of multiple languages and language interoperability APIs.
Abstract: Developer preferences, language capabilities and the persistence of older languages contribute to the trend that large software codebases are often multilingual, that is, written in more than one computer language. While developers can leverage monolingual software development tools to build software components, companies are faced with the problem of managing the resultant large, multilingual codebases to address issues with security, efficiency, and quality metrics. The key challenge is to address the opaque nature of the language interoperability interface: one language calling procedures in a second (which may call a third, or even back to the first), resulting in a potentially tangled, inefficient and insecure codebase. An architecture is proposed for lightweight static analysis of large multilingual codebases: the MLSA architecture. Its modular and table-oriented structure addresses the open-ended nature of multiple languages and language interoperability APIs. We focus here as an application on the construction of call-graphs that capture both inter-language and intra-language calls. The algorithms for extracting multilingual call-graphs from codebases are presented, and several examples of multilingual software engineering analysis are discussed. The state of the implementation and testing of MLSA is presented, and the implications for future work are discussed.
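The table-oriented idea can be sketched as follows: per-language analyses emit (caller, callee) rows, the rows are merged into a single call-graph, and edges whose endpoints are defined in different languages are marked as inter-language calls. The hand-written rows below stand in for MLSA's real language front-ends.

```python
# Hypothetical sketch of merging per-language call tables into one call-graph and
# marking inter-language edges. The rows below are hand-written simplifications,
# not the output of MLSA's actual analyzers.
from collections import defaultdict

# One table per language, as a lightweight front-end analysis might produce them.
python_calls = [("report.build", "libstats.mean"), ("report.build", "report.render")]
c_calls      = [("libstats.mean", "libstats.sum")]
defined_in   = {"report.build": "python", "report.render": "python",
                "libstats.mean": "c", "libstats.sum": "c"}

graph = defaultdict(set)
for caller, callee in python_calls + c_calls:
    graph[caller].add(callee)

for caller, callees in graph.items():
    for callee in callees:
        kind = ("inter-language" if defined_in[caller] != defined_in[callee]
                else "intra-language")
        print(f"{caller} -> {callee}  [{kind}]")
```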

Proceedings ArticleDOI
01 Jan 2017
TL;DR: A generic architecture is proposed to classify analytical approaches, together with a classification of the existing query languages based on the facilities provided to access big data architectures, in order to evaluate different solutions.
Abstract: Analytical data management applications, affected by the explosion of the amount of generated data in the context of Big Data, are shifting away their analytical databases towards a vast landscape of architectural solutions combining storage techniques, programming models, languages, and tools. To support users in the hard task of deciding which Big Data solution is the most appropriate according to their specific requirements, we propose a generic architecture to classify analytical approaches. We also establish a classification of the existing query languages, based on the facilities provided to access the big data architectures. Moreover, to evaluate different solutions, we propose a set of criteria of comparison, such as OLAP support, scalability, and fault tolerance support. We classify different existing Big Data analytics solutions according to our proposed generic architecture and qualitatively evaluate them in terms of the criteria of comparison. We illustrate how our proposed generic architecture can be used to decide which Big Data analytic approach is suitable in the context of several use cases.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: It is highlighted that one must go back to the fundamental question in order to have any chance of developing effective tools to help with program understanding, which is the most costly part of program maintenance.
Abstract: During the last three decades several hundred papers have been published on the broad topic of “program comprehension”. The goal was always the same: to develop models and tools to help developers with program understanding during program maintenance. However, few authors targeted the more fundamental question "what is program understanding?" or, in other words, proposed a model of program understanding. We therefore reviewed the proposed program understanding models. We found the papers to be classifiable into three periods of time according to the following three subtopics: the process, the tools and the goals. Interestingly, studying the fundamental goal came after the tools. We conclude by highlighting that it is necessary to go back to the fundamental question in order to have any chance of developing effective tools to help with program understanding, which is the most costly part of program maintenance.

Proceedings ArticleDOI
24 Jul 2017
TL;DR: The paper presents the conceptual architecture of WOF, an object-oriented framework founded on the concept of Wise Object (WO), and the Java implementation the authors built from it.
Abstract: Designing Intelligent Adaptive Distributed Systems is an open research issue addressing today's technologies such as Communicating Objects (COT) and the Internet of Things (IoT) that increasingly contribute to our daily life (mobile phones, computers, home automation, etc.). The complexity and sophistication of those systems make them hard for human users, in particular end-users and developers, to understand and master. They are very often involved in learning processes that capture all their attention while being of little interest to them. To alleviate human interaction with such systems and help developers to produce them, we propose WOF, an object-oriented framework founded on the concept of Wise Object (WO). A WO is a software-based entity that is able to learn about itself and also about others (e.g. its environment). Wisdom refers to the experience (of its own behavior and of the usage made of it) that such an object acquires on its own during its life. In the paper, we present the WOF conceptual architecture and the Java implementation we built from it. Requirements and design principles of wise systems are presented. To provide application (e.g. home automation system) developers with relevant support, we designed WOF with minimal intrusion into the application source code. The adaptiveness, intelligence and distribution related mechanisms defined in WOF are inherited by application classes. In our Java implementation of WOF, object classes produced by a developer inherit the behavior of the Wise Object (WO) class. An instantiated system is then a Wise Object System (WOS) composed of wise objects that interact through an event bus according to the publish-subscribe design pattern.
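A conceptual Python analogue of the publish-subscribe wiring described above is sketched below (the actual framework is implemented in Java): application classes inherit from a wise-object base class that records its own usage and publishes events on a bus. The class and event names are illustrative, not the WOF API.

```python
# Conceptual sketch (not the WOF API): wise objects record their own usage and
# publish events on an event bus; subscribers can observe them.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)
    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

class WiseObject:
    """Base class: records its own method usage and publishes it on the bus."""
    def __init__(self, bus):
        self.bus = bus
        self.usage = defaultdict(int)
    def record(self, action):
        self.usage[action] += 1
        self.bus.publish("usage", {"object": type(self).__name__, "action": action})

class Lamp(WiseObject):                 # an application class inheriting WO behaviour
    def switch_on(self):
        self.record("switch_on")

bus = EventBus()
bus.subscribe("usage", lambda event: print("observed:", event))
Lamp(bus).switch_on()
```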

Proceedings ArticleDOI
01 Jan 2017
TL;DR: This paper introduces a new approach for specifying adaptive probabilistic discrete event systems and presents the semantics of GR-TNCES to optimize the specification of unpredictable timed reconfiguration scenarios running under resource constraints.
Abstract: The distinguishing features of probabilistic adaptive systems are uncertainty and reconfigurability. The structure of a part of the system may be totally unknown or partially unknown at a particular time. Openness is also an inherent property, as agents may join or leave the system throughout its lifetime. This poses severe challenges for state-based specification. The languages in which probabilistic reconfigurable systems are specified should be clear and intuitive, and thus accessible to generation, inspection and modification by humans. This paper introduces a new approach for specifying adaptive probabilistic discrete event systems. We introduce the semantics of GR-TNCES to optimize the specification of unpredictable timed reconfiguration scenarios running under resource constraints. We also apply this approach to specify the requirements of an automotive transport system and evaluate its benefits.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: An energy-aware real-time scheduling algorithm that makes use of the deferrable server for the scheduling of aperiodic tasks along with DVFS and demonstrates a decrease in the resulting energy consumption.
Abstract: One of the major challenges of computer system design is the management and conservation of energy while satisfying QoS requirements. Recently, Dynamic Voltage and Frequency Scaling (DVFS) has been integrated into various embedded processors as a means to increase the battery life without affecting the responsiveness of tasks. This paper proposes an enhancement of the I-codesign methodology [1] that optimizes the energy consumption of the designed system. We propose an energy-aware real-time scheduling algorithm. This algorithm makes use of the deferrable server for the scheduling of aperiodic tasks along with DVFS. Simulation results demonstrate a decrease in the resulting energy consumption compared to previously published work.
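A back-of-the-envelope illustration of why DVFS saves energy: dynamic power scales roughly with C·V²·f, so running the same work more slowly at a lower voltage, while still meeting the deadline, reduces energy. The numbers and the linear voltage/frequency pairing are simplifying assumptions, and the deferrable-server bookkeeping is not modelled.

```python
# Back-of-the-envelope illustration of the DVFS saving: dynamic power scales
# roughly with C * V^2 * f, so stretching a task to its deadline at a lower
# frequency/voltage pair reduces energy. Numbers are simplifying assumptions;
# the deferrable-server handling of aperiodic tasks is not modelled here.
def dynamic_energy(work_gcycles, freq_ghz, volt, capacitance=1.0):
    power = capacitance * volt ** 2 * freq_ghz    # P ~ C * V^2 * f (arbitrary units)
    time_s = work_gcycles / freq_ghz              # execution time in seconds
    return power * time_s, time_s

deadline_s = 2.0                                   # the task must finish within 2 s
e_full, t_full = dynamic_energy(2.0, freq_ghz=2.0, volt=1.2)   # 2 Gcycles at full speed
e_half, t_half = dynamic_energy(2.0, freq_ghz=1.0, volt=0.6)   # same work at half speed
assert t_full <= deadline_s and t_half <= deadline_s           # both meet the deadline
print(f"full speed: {e_full:.2f} units, half speed: {e_half:.2f} units")
```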

Proceedings ArticleDOI
01 Jan 2017
TL;DR: A comprehensive summary of the current trends in the domain of WebRTC testing is provided by aggregating the results from three different sources of information, including "Grey literature”, that is, materials produced by organizations outside of the traditional commercial or academic publishing and distribution channels.
Abstract: WebRTC is the umbrella term for a number of emerging technologies that extend the web browsing model to exchange real-time media (Voice over IP, VoIP) with other browsers. The mechanisms to provide quality assurance for WebRTC are key to releasing this kind of application to production environments. Nevertheless, testing WebRTC-based applications in a consistent and automated fashion is a challenging problem. The aim of this piece of research is to provide a comprehensive summary of the current trends in the domain of WebRTC testing. For the sake of completeness, we have carried out this survey by aggregating the results from three different sources of information: i) scientific and academic research papers; ii) WebRTC testing tools (both commercial and open source); iii) "grey literature", that is, materials produced by organizations outside of the traditional commercial or academic publishing and distribution channels.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: The proposed decision model is based on a multi-criteria method, namely the ELECTRE TRI method, and is applied to determine suitable evaluation methods for an adaptive hypermedia system.
Abstract: The layered evaluation of interactive adaptive systems has to consider many evaluation methods. The best evaluation method to be used for individual layers depends on many parameters such as the evaluation criteria, the stage of the development cycle, and the characteristics of the layer under consideration. This paper presents a decision model for selecting the appropriate evaluation methods for individual layers of the interactive adaptive system. Our proposal is based on a multi-criteria method, namely the ELECTRE TRI method. The proposed decision model is applied to determine suitable evaluation methods for an adaptive hypermedia system.
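A simplified sketch of the ELECTRE TRI assignment idea: each evaluation method is scored on several criteria, compared against category boundary profiles with a weighted concordance index, and assigned to the highest category it outranks. The criteria, weights, profiles and cutting level are illustrative assumptions, and discordance/veto thresholds are omitted.

```python
# Simplified sketch of ELECTRE TRI-style assignment. Criteria, weights, profiles,
# and the cutting level are illustrative assumptions; discordance and veto
# thresholds, which full ELECTRE TRI uses, are omitted.
weights = {"cost": 0.3, "coverage": 0.4, "expertise_needed": 0.3}   # sum to 1
# Category boundary profiles, ordered from lowest ("acceptable") to highest ("good").
profiles = [("acceptable", {"cost": 0.4, "coverage": 0.4, "expertise_needed": 0.4}),
            ("good",       {"cost": 0.7, "coverage": 0.7, "expertise_needed": 0.7})]
CUTTING_LEVEL = 0.6

def concordance(alternative, profile):
    # Share of criterion weight on which the alternative is at least as good as the profile.
    return sum(w for c, w in weights.items() if alternative[c] >= profile[c])

def assign(alternative):
    category = "insufficient"
    for name, profile in profiles:           # bottom-up: keep the highest outranked profile
        if concordance(alternative, profile) >= CUTTING_LEVEL:
            category = name
    return category

user_test = {"cost": 0.5, "coverage": 0.8, "expertise_needed": 0.6}
print("user testing assigned to:", assign(user_test))
```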

Proceedings ArticleDOI
01 Jan 2017
TL;DR: A preliminary investigation of the behaviour of existing software metric tools is proposed, using different software analysis tools on three software systems of different sizes to show that, for the same software system and metrics, the tools provide different values.
Abstract: The availability of quality models and metrics that permit an objective evaluation of the quality level of a software product is a relevant aspect for supporting software engineers during their development tasks. In addition, the adoption of software analysis tools that facilitate the measurement of software metrics and application of the quality models can ease the evaluation tasks. This paper proposes a preliminary investigation of the behaviour of existing software metric tools. Specifically, metric values have been computed using different software analysis tools for three software systems of different sizes. Measurements show that, for the same software system and metrics, the software analysis tools provide different values. This could impact the overall software quality evaluation for the aspects based on the selected metrics.