
Showing papers in "Software - Practice and Experience in 2022"


Journal ArticleDOI
TL;DR: Wang et al. propose a practical and secure multifactor user authentication protocol for AVs in 5G networks, based on non-interactive zero-knowledge proofs and physically unclonable functions.
Abstract: Autonomous vehicles (AVs) can not only improve traffic safety and congestion, but also have strategic significance for the development of the transportation industry. The continuous advancement of core technologies such as artificial intelligence, sensor detection, simultaneous localization, and high-precision mapping has promoted the development of AVs. When 5G networks are combined with the Internet of Vehicles, many AV problems can be addressed by exploiting 5G's ultra-large bandwidth, low latency, and high reliability. However, when a user controls the vehicle remotely, a real-time and reliable authentication process is needed, while the overhead of security protocols must be minimized. Therefore, this article proposes a practical and secure multifactor user authentication protocol for AVs in 5G networks. By introducing non-interactive zero-knowledge proofs and physically unclonable functions, the protocol completes mutual authentication and key agreement without revealing any sensitive information. The article proves the security of the protocol through BAN logic and simulation in Scyther, showing that it resists malicious attacks and provides additional security features. The informal security analysis shows that the protocol meets the proposed security requirements. Finally, we evaluate the efficiency of the protocol, and the results show that it provides better performance.
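
The protocol itself is not reproduced here, but the following minimal Python sketch illustrates the general pattern such schemes build on: a PUF-style challenge-response handshake followed by session-key derivation. The SimulatedPUF class, the enrollment database, and the key-derivation formula are illustrative assumptions (a real PUF derives responses from hardware variation, and the paper's scheme additionally uses non-interactive zero-knowledge proofs).

# Illustrative sketch only: a PUF-style challenge-response handshake with
# session-key derivation. It is NOT the paper's protocol; names and parameters
# are hypothetical.
import hmac, hashlib, os

class SimulatedPUF:
    """Stands in for a physically unclonable function; a real PUF derives the
    response from manufacturing variation rather than a stored secret."""
    def __init__(self):
        self._secret = os.urandom(32)          # models intrinsic device variation
    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

def enroll(puf: SimulatedPUF, n: int = 4):
    """Server stores challenge-response pairs during a trusted enrollment phase."""
    return [(c, puf.respond(c)) for c in (os.urandom(16) for _ in range(n))]

def authenticate(puf: SimulatedPUF, crp_db):
    """One round: the server picks a stored challenge, the vehicle's PUF answers,
    and both sides derive the same session key from the response."""
    challenge, expected = crp_db.pop()          # each challenge-response pair is used once
    nonce_server, nonce_vehicle = os.urandom(16), os.urandom(16)
    response = puf.respond(challenge)
    if not hmac.compare_digest(response, expected):
        raise PermissionError("PUF response mismatch")
    # Both parties can compute this key without sending the response in clear.
    return hashlib.sha256(expected + nonce_server + nonce_vehicle).digest()

puf = SimulatedPUF()
db = enroll(puf)
key = authenticate(puf, db)
print("agreed session key:", key.hex()[:16], "...")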

15 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the gap between a notebook environment and a virtual research environment and propose an embedded VRE solution for the Jupyter environment called Notebook-as-a-VRE (NaaVRE).
Abstract: Virtual Research Environments (VREs) provide user-centric support in the lifecycle of research activities, e.g., discovering and accessing research assets, or composing and executing application workflows. A typical VRE is often implemented as an integrated environment, which includes a catalog of research assets, a workflow management system, a data management framework, and tools for enabling collaboration among users. Notebook environments, such as Jupyter, allow researchers to rapidly prototype scientific code and share their experiments as online accessible notebooks. Jupyter can support several popular languages that are used by data scientists, such as Python, R, and Julia. However, such notebook environments do not have seamless support for running heavy computations on remote infrastructure or finding and accessing software code inside notebooks. This paper investigates the gap between a notebook environment and a VRE and proposes an embedded VRE solution for the Jupyter environment called Notebook-as-a-VRE (NaaVRE). The NaaVRE solution provides functional components via a component marketplace and allows users to create a customized VRE on top of the Jupyter environment. From the VRE, a user can search research assets (data, software, and algorithms), compose workflows, manage the lifecycle of an experiment, and share the results among users in the community. We demonstrate how such a solution can enhance a legacy workflow that uses Light Detection and Ranging (LiDAR) data from country-wide airborne laser scanning surveys for deriving geospatial data products of ecosystem structure at high resolution over broad spatial extents. This enables users to scale out the processing of multi-terabyte LiDAR point clouds for ecological applications to more data sources in a distributed cloud environment.

7 citations


Journal ArticleDOI
TL;DR: In this paper, an intelligent network architecture for Internet of Vehicles (IoV) services is proposed by combining network slicing and deep learning (DL) technology, and the key technologies needed to realize the architecture are studied.
Abstract: This work explores efficient and reliable wireless transmission and cooperative communication mechanisms for the Internet of Vehicles (IoV) based on edge intelligence technology. It first proposes an intelligent network architecture for IoV services by combining network slicing and deep learning (DL) technology, and then studies the key technologies needed to realize the architecture. It designs a cooperative control mechanism for the unmanned vehicle network based on a thorough study of wireless resource allocation algorithms at the micro level. Second, in order to improve driving safety, deep reinforcement learning is used to configure the wireless resources of the IoV network to meet the needs of various IoV services. The research results show that the accuracy of the improved AlexNet model can reach 99.64% and remains above 80%, the data transmission delay is less than 0.02 ms, and the packet loss rate is less than 0.05. The model has practical value for solving data transmission problems in vehicular network communication and provides an important reference for the intelligent development of the unmanned vehicle internet.

6 citations


Journal ArticleDOI
TL;DR: This work introduces a vendor- and technology-agnostic method for the modeling and deployment of serverless function orchestrations, which relies on the business process model and notation (BPMN) and topology and orchestration specification for cloud applications (TOSCA) standards for modeling function orchestrations and their deployment, respectively.
Abstract: Function-as-a-Service (FaaS) is a cloud service model that enables the implementation of serverless applications for a variety of use cases. These range from scheduled calls of single functions to complex function orchestrations executed using orchestration services such as AWS Step Functions. However, since the available function orchestration technologies vary in functionality, supported modeling languages, and APIs, modeling such function orchestrations and their deployment requires significant technology-specific expertise. Moreover, the resulting models are typically not portable due to provider- and technology-specific details, and major efforts are required when exchanging an orchestrator or provider due to such lock-ins. To tackle this issue, we introduce a vendor- and technology-agnostic method for the modeling and deployment of serverless function orchestrations, which relies on the business process model and notation (BPMN) and topology and orchestration specification for cloud applications (TOSCA) standards for modeling function orchestrations and their deployment, respectively. We also present a toolchain for modeling serverless function orchestrations in BPMN, generating proprietary models supported by different function orchestration technologies from BPMN models, specifying their actual deployment in TOSCA, and then enacting such deployment. Finally, we illustrate a case study applying our method and toolchain in practice.
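
As a hedged illustration of what generating a provider-specific orchestration model can look like, the sketch below turns a simple sequential task list (as might be extracted from a BPMN process) into an Amazon States Language state machine. The task names and Lambda ARNs are placeholders, and the paper's TOSCA-based deployment step is not covered.

# Hypothetical sketch: translating a simple sequential function orchestration
# (as might be modeled in BPMN) into an Amazon States Language (ASL) state
# machine. Task names and ARNs are placeholders; branching, error handling,
# and the TOSCA deployment step described in the paper are omitted.
import json

def sequence_to_asl(tasks):
    """tasks: ordered list of (state_name, lambda_arn) pairs."""
    states = {}
    for i, (name, arn) in enumerate(tasks):
        state = {"Type": "Task", "Resource": arn}
        if i + 1 < len(tasks):
            state["Next"] = tasks[i + 1][0]     # chain to the following task
        else:
            state["End"] = True                 # last task terminates the workflow
        states[name] = state
    return {"Comment": "Generated from a sequential orchestration model",
            "StartAt": tasks[0][0],
            "States": states}

workflow = [
    ("ValidateOrder", "arn:aws:lambda:eu-west-1:123456789012:function:validate"),
    ("ChargeCard",    "arn:aws:lambda:eu-west-1:123456789012:function:charge"),
    ("SendReceipt",   "arn:aws:lambda:eu-west-1:123456789012:function:receipt"),
]
print(json.dumps(sequence_to_asl(workflow), indent=2))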

6 citations


Journal ArticleDOI
TL;DR: The xDEVS framework is a modeling and simulation (M&S) framework based on the Discrete Event System Specification (DEVS) for modeling the structural, behavioral, and informational aspects of any complex system.
Abstract: Employing Modeling and Simulation (M&S) extensively to analyze and develop complex systems is the norm today. The use of robust M&S formalisms and rigorous methodologies is essential to deal with complexity. Among them, the Discrete Event System Specification (DEVS) provides a solid framework for modeling the structural, behavioral, and informational aspects of any complex system. This provides several advantages for analyzing and designing complex systems: completeness, verifiability, extensibility, and maintainability. The DEVS formalism has been implemented in many programming languages and is executable on multiple platforms. In this paper, we describe the features of an M&S framework called xDEVS that builds upon the prevalent DEVS Application Programming Interface (API) for both the modeling and simulation layers, promoting interoperability between the existing platform-specific (C++, Java, Python) DEVS implementations. Additionally, the framework can simulate the same model using sequential, parallel, or distributed architectures. The M&S engine has been reinforced with several strategies to improve performance, as well as tools to perform model analysis and verification. Finally, xDEVS also helps systems engineers apply the vision of the model-based systems engineering (MBSE), model-driven engineering (MDE), and model-driven systems engineering (MDSE) paradigms. We highlight the features of the proposed xDEVS framework with multiple examples and case studies illustrating the rigor and diversity of application domains it can support.
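
For readers unfamiliar with DEVS, the sketch below shows the shape of a DEVS atomic model (a processor with a fixed service time) in plain Python. The method names and the minimal hand-driven trace are a generic illustration of the formalism's four functions, not the actual xDEVS API.

# A generic DEVS atomic model (a simple processor with fixed service time),
# written against a hypothetical minimal interface, not the actual xDEVS API.
# It illustrates the four functions every DEVS atomic model defines.
INFINITY = float("inf")

class Processor:
    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.current_job = None           # state: job being processed, if any

    def ta(self):
        """Time advance: how long to stay in the current state."""
        return self.service_time if self.current_job is not None else INFINITY

    def delta_ext(self, e, job):
        """External transition: a job arrives after elapsed time e."""
        if self.current_job is None:      # ignore arrivals while busy (simplification)
            self.current_job = job

    def delta_int(self):
        """Internal transition: fires when ta() expires."""
        self.current_job = None

    def lambda_out(self):
        """Output function: called just before the internal transition."""
        return self.current_job

# Minimal hand-driven trace (a real simulator's coordinator would drive this):
p = Processor()
p.delta_ext(0.0, "job-1")
assert p.ta() == 2.0
print("output at t=2.0:", p.lambda_out())
p.delta_int()
assert p.ta() == INFINITY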

6 citations


Journal ArticleDOI
TL;DR: A state-of-the-art review of Cloud Computing and the Cloud of Things (CoT) is presented in this article, addressing the techniques, constraints, limitations, and research challenges.
Abstract: With the advent of the Internet of Things (IoT) paradigm, the cloud model is unable to offer satisfactory services for latency-sensitive and real-time applications due to high latency and scalability issues. Hence, an emerging computing paradigm named fog/edge computing evolved to offer services close to the data source and optimize quality of service (QoS) parameters such as latency, scalability, reliability, energy, privacy, and security of data. This article presents the evolution of the computing paradigm from the client-server model to edge computing, along with their objectives and limitations. A state-of-the-art review of Cloud Computing and the Cloud of Things (CoT) is presented that addresses the techniques, constraints, limitations, and research challenges. Further, we discuss the role and mechanism of fog/edge computing and the Fog of Things (FoT), along with their necessary amalgamation with CoT. We review the various architectures, features, applications, and existing research challenges of fog/edge computing. The comprehensive survey of these computing paradigms offers in-depth knowledge about the various aspects, trends, motivations, visions, and integrated architectures. In the end, experimental tools and future research directions are discussed, with the hope that this study will serve as a stepping-stone in the field of emerging computing paradigms.

5 citations


Journal ArticleDOI
TL;DR: This article proposes an end-to-end framework named RESCUE (enabling green healthcare services using integrated IoT-edge-fog-cloud computing environments), consisting of an efficient spatio-temporal data analytics module for information sharing and a novel path prediction module that incorporates volunteered geographical information (VGI) instances and predicts routes in emergencies while avoiding all possible risks.
Abstract: The Internet of Things (IoT) has a pivotal role in developing intelligent and computational solutions to facilitate varied real-life applications. To execute high-end computations and data analytics, IoT- and cloud-based solutions play the most significant role. However, frequent communication with distant cloud servers is neither a delay-aware nor an energy-efficient solution when providing time-critical applications such as healthcare. This article explores the possibilities and opportunities of integrating cloud technology with fog- and edge-based computing to provide healthcare services to users in exigency. Here, we propose an end-to-end framework named RESCUE (enabling green healthcare services using integrated IoT-edge-fog-cloud computing environments), consisting of an efficient spatio-temporal data analytics module for information sharing and a path prediction module that helps users reach their destination (healthcare center or relief camps) with minimum delay in times of exigency (say, a natural disaster). This module analyzes information collected through crowd-sourcing and assists the user by extracting an optimal path post-disaster when many regions are unreachable. Our work differs from the existing literature in several aspects: it analyzes context and semantics by augmenting real-time volunteered geographical information (VGI) and refining it. Furthermore, the novel path prediction module incorporates such VGI instances and predicts routes in emergencies while avoiding all possible risks. Also, the design and development of a latency-aware, power-aware, data-driven analytics system helps resolve any spatio-temporal query more efficiently than existing works for any time-critical application. The experimental and simulation results outperform the baselines in terms of accuracy, delay, and power consumption.
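
The paper's VGI-driven predictor is considerably richer, but the minimal networkx sketch below conveys the core routing idea: fold crowd-reported risk into edge weights so that a shortest-path query steers users around hazardous segments. The road graph, risk scores, and RISK_PENALTY factor are made up for illustration.

# Illustrative sketch (not the RESCUE path-prediction module): combine travel
# time with a crowd-reported risk score into a single edge weight and route
# around blocked segments with a shortest-path query. Requires networkx.
import networkx as nx

RISK_PENALTY = 10.0      # hypothetical weighting of risk relative to travel time

g = nx.Graph()
# (node_a, node_b, travel_time_minutes, risk in [0, 1] from volunteered reports)
edges = [("home", "bridge", 5, 0.9),          # bridge reported nearly impassable
         ("home", "detour", 9, 0.1),
         ("bridge", "relief_camp", 4, 0.2),
         ("detour", "relief_camp", 6, 0.1)]
for u, v, t, risk in edges:
    g.add_edge(u, v, weight=t + RISK_PENALTY * risk)

path = nx.shortest_path(g, "home", "relief_camp", weight="weight")
print("suggested route:", " -> ".join(path))   # picks the safer detour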

5 citations


Journal ArticleDOI
TL;DR: In this article, the authors aim to reach consensus on the DevOps benefits reported in the existing literature; two systematic literature reviews are conducted, the second mapping the benefits found in the first to implementation case studies and providing empirical evidence for each benefit.
Abstract: Among current IT work cultures, DevOps stands out as one of the most adopted worldwide. The focus of this culture is on bridging the gap between development and operations teams, enabling collaborative effort toward quickly producing software without sacrificing its quality and support. DevOps is used to tackle a variety of issues; as such, authors report differing benefits when performing their analyses. In this research, we aim to reach consensus on the DevOps benefits reported in the existing literature. To accomplish this objective, two systematic literature reviews were conducted. The first finds all benefits reported in the literature, while the second maps the benefits found in the first to DevOps implementation case studies, providing empirical evidence for each benefit. To strengthen the results, the concept-centric approach is used. During this research it was possible to observe that the most reported benefits are aligned with the DevOps premises of better collaboration between developers and operators and of delivering software and products more quickly. Based on DevOps implementation case studies, the most reported benefits include a faster time to market as well as improvements in synergy and automation. Less reported benefits include a reduction in failed changes and security issues.

5 citations


Journal ArticleDOI
TL;DR: A method based on LSTM (Long Short-Term Memory) for malware detection that is capable not only of distinguishing malware from benign samples, but also of detecting and identifying new and unseen families of malware.
Abstract: The increasing capabilities of smartphones have caught the attention of many users. This has led to the emergence of malware that threatens users' privacy and security. Many malware detection methods have been proposed to deal with emerging threats; one of the most effective is network traffic analysis. This article proposes a method based on LSTM (Long Short-Term Memory) for malware detection that is capable not only of distinguishing malware from benign samples, but also of detecting and identifying new and unseen families of malware. As far as we know, this is the first time that traffic data has been modeled as a sequence of flows and a sequence-based deep learning model has been employed. In this article, we perform several case studies to exhibit the capabilities of the proposed method, including malware detection, malware family identification, detection of new (not seen before) malware families, and evaluation of the minimum time required to detect malware. The case studies show that the model is capable of detecting even new families of malware with more than 90% accuracy, although these results can only be verified on the families present in this dataset and such a claim cannot be generalized to other malware. Moreover, the model is shown to detect malware after capturing 50 connection flows (about 1600 packets on average) with an AUC of more than 99.9%.
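
To make the modeling idea concrete, here is a minimal PyTorch sketch in the same spirit: an app's traffic is represented as a sequence of per-flow feature vectors and classified by an LSTM. The feature dimension, hidden size, and random data are placeholders; this is not the paper's architecture or dataset.

# Minimal PyTorch sketch of the general idea: treat an app's traffic as a
# sequence of per-flow feature vectors and classify it with an LSTM. The
# feature size, hidden size, and data here are placeholders.
import torch
import torch.nn as nn

class FlowSequenceClassifier(nn.Module):
    def __init__(self, n_flow_features=12, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_flow_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)     # malware vs. benign logit

    def forward(self, flows):                # flows: (batch, seq_len, features)
        _, (h_n, _) = self.lstm(flows)
        return self.head(h_n[-1]).squeeze(-1)

model = FlowSequenceClassifier()
# Fake batch: 8 samples, 50 flows each (the paper reports ~50 flows suffice),
# 12 statistical features per flow (packet counts, byte counts, durations, ...).
x = torch.randn(8, 50, 12)
y = torch.randint(0, 2, (8,)).float()
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()
print("toy loss:", float(loss))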

5 citations


Journal ArticleDOI
TL;DR: This article proposes to synthesize the well‐known secure software development practices for both linear and agile lifecycle models using the MediaWiki platform, and makes this knowledge available to software developers and designers from a single source.
Abstract: Application security is an important concern, and security activities to support software development lifecycle processes, such as specification, design, implementation, and testing, are increasingly in need. Despite the plethora of knowledge available for secure software development online and in books, software systems are seldom secure because developers lack security knowledge. The primary reason for this paradox is the diversity and overwhelming nature of the available security knowledge. In this article, we propose to synthesize the well-known secure software development practices for both linear and agile lifecycle models. Using the MediaWiki platform, we make this knowledge available to software developers and designers from a single source.

5 citations


Journal ArticleDOI
TL;DR: This work proposes an approach to extract candidate microservices from existing legacy applications using graph-based algorithms and demonstrates the extraction on a SOA-based web application.
Abstract: Service-oriented architecture (SOA) has been widely used to design enterprise applications over the past two decades. The services in SOA are becoming complex with the increase in changing user requirements, and SOA is still seen as monolithic from a deployment perspective. Monolithic services make the application complex and difficult to maintain. With the evolution of the microservices architecture, software architects have started migrating legacy applications to microservices. However, existing migration approaches in the literature mostly focus on migrating monolithic applications to microservices. To the best of our knowledge, very little work has been done on migrating SOA applications to microservices. One of the major challenges in the migration process is the extraction of microservices from the existing legacy applications. To address this, we propose an approach to extract candidate microservices using graph-based algorithms. In particular, four algorithms are defined: (i) construction of a service graph (SG), (ii) construction of a task graph (TG) for each service of a SOA application, (iii) extraction of candidate microservices using the SG of the SOA application, and (iv) construction of a SG for the microservices application to retain the dependencies between the generated microservices. We chose a SOA-based web application to demonstrate the proposed microservices extraction approach and extracted the microservices. Additionally, we evaluated the extracted microservices and compared them with the SOA-based services.
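
The paper defines four dedicated graph algorithms; as a rough, hedged analogue of the same idea, the sketch below builds a weighted dependency graph over service operations with networkx and uses modularity-based community detection to suggest candidate microservice boundaries. The operations and weights are invented for illustration.

# Illustrative stand-in for graph-based microservice extraction (not the
# paper's four algorithms): build a weighted dependency graph over service
# operations and use modularity-based community detection to propose
# candidate microservice boundaries. Requires networkx.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

g = nx.Graph()
# (operation_a, operation_b, call/coupling weight) -- hypothetical SOA app
edges = [("createOrder", "reserveStock", 8),
         ("createOrder", "priceOrder", 6),
         ("reserveStock", "priceOrder", 5),
         ("registerUser", "sendWelcomeMail", 7),
         ("registerUser", "createProfile", 9),
         ("createOrder", "createProfile", 1)]   # weak cross-cutting link
g.add_weighted_edges_from(edges)

candidates = greedy_modularity_communities(g, weight="weight")
for i, members in enumerate(candidates, 1):
    print(f"candidate microservice {i}: {sorted(members)}")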

Journal ArticleDOI
TL;DR: In this paper, an effective multi-objective cost model based on the flamingo search algorithm is proposed for materialized view selection in data warehouse design; it evaluates a multi-objective optimization problem based on the cost functions resulting from materialization.
Abstract: Materialized view selection in data warehouse management is important to speed up query processing. The data presented in data warehouses is generally stored as a set of materialized views. The major challenge is determining which views to materialize so as to satisfy the response time with reduced cost functions. This paper proposes an effective multi-objective, cost-model-based flamingo search algorithm for materialized view selection in data warehouse design. The multiple-view processing plan structure of the data warehouse describes the search space of the problem in order to select the optimal materialized views. The proposed model evaluates a multi-objective optimization problem based on the cost functions resulting from materialization. The objective functions of the proposed model are maintenance cost, current query processing cost, response cost, and previous query processing cost. The model selects the top-k views for materialization by satisfying these multi-objective functions. The experimental results are simulated using the TPC-H dataset. The efficacy of the proposed model is measured by comparing its results with those of various existing approaches.
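
The paper's search is driven by the flamingo search metaheuristic over a multiple-view processing plan; the toy sketch below substitutes a plain weighted-sum score and a greedy top-k pass just to show what trading off maintenance, query, and response costs for a candidate view set can look like. All views, cost figures, and weights are invented.

# Simplified illustration of multi-objective top-k view selection: a plain
# greedy pass over a weighted-sum cost stands in for the paper's flamingo
# search metaheuristic, with made-up cost figures.
views = {
    # view: (query_processing_benefit, maintenance_cost, response_cost)
    "v_sales_by_region": (120.0, 30.0, 10.0),
    "v_daily_revenue":   (200.0, 80.0, 15.0),
    "v_top_customers":   (90.0,  20.0,  8.0),
    "v_inventory_snap":  (60.0,  55.0, 12.0),
}
WEIGHTS = (1.0, 0.6, 0.4)     # hypothetical trade-off between the objectives

def score(benefit, maintenance, response):
    w_b, w_m, w_r = WEIGHTS
    return w_b * benefit - w_m * maintenance - w_r * response

def select_top_k(candidates, k=2):
    ranked = sorted(candidates.items(), key=lambda kv: score(*kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

print("views to materialize:", select_top_k(views, k=2))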

Journal ArticleDOI
TL;DR: The problems facing IoT in the quantum era are discussed, the focus is on appropriate PKC-based solutions under the limited resources of IoT, and, as lattice-based cryptosystems are more effective, the importance of these schemes in the resource-constrained IoT is highlighted.
Abstract: As the number and characteristics of smart devices change, the concept of the Internet of Things (IoT) emerges. The IoT provides the connected devices with a variety of resources that enable effective communication. At this point, several security issues arise around the sensitive information behind every communication in the IoT. To provide users with security and privacy, cryptographic schemes are adopted, the most popular being public key cryptographic (PKC) systems. However, with the advent of quantum computing, the level of security that can be provided by PKC schemes is a big question. Another important issue is that the IoT environment is resource-constrained, which necessitates the implementation of lightweight cryptographic algorithms for better security. In response to these issues, post-quantum cryptographic (PQC) schemes are one of the significant developments contributing to IoT security in the post-quantum world. This article examines the key security issues in the IoT environment and the effective solutions found in the literature. The problems facing IoT in the quantum era are discussed, and the focus is on appropriate PKC-based solutions under the limited resources of IoT. As lattice-based cryptosystems are more effective, the importance of these schemes in the resource-constrained IoT is highlighted. This survey also points to feasible future directions that can support developers and researchers in this field.

Journal ArticleDOI
TL;DR: This article contributes to a working catalogue of microservice‐specific scalability dimensions and metrics and describes a novel application of scalability goal‐obstacle analysis for the context of reasoning about microservice granularity adaptation.
Abstract: Microservices have gained wide recognition and acceptance in the software industry as an emerging architectural style for autonomous, scalable, and more reliable computing. A critical problem related to microservices is reasoning about the suitable granularity level of a microservice (i.e., when and how to merge or decompose microservices). Although scalability is pronounced as one of the major factors for the adoption of microservices, there is a general lack of approaches that systematically analyse the dimensions and metrics that are important for scalability-aware granularity adaptation decisions. To the best of our knowledge, the state of the art in reasoning about microservice granularity adaptation neither (1) is driven by microservice-specific scalability dimensions and metrics nor (2) follows systematic scalability analysis to make scalability-aware adaptation decisions. In this article, we address the aforementioned problems with a two-fold contribution. Firstly, we contribute a working catalogue of microservice-specific scalability dimensions and metrics. Secondly, we describe a novel application of scalability goal-obstacle analysis in the context of reasoning about microservice granularity adaptation. We analyse both contributions by comparing their usage on a hypothetical microservice architecture against an ad-hoc scalability assessment for the same architecture. This analysis shows how both contributions can aid in making scalability-aware granularity adaptation decisions.

Journal ArticleDOI
TL;DR: The simulation results of the iFogSim simulator showed that the imperialist competitive algorithm with the proposed graph partitioning approach has improved service placement on the fog infrastructure compared to the genetic algorithm and best‐fit algorithm.
Abstract: The Internet of Things (IoT) represents a new generation of information and communication technology for anyone, anytime, and anywhere. Cloud-service-based IoT applications significantly increase latency and network utilization. The fog environment is closer to the user and performs computing, communication, and storage tasks on network edge devices. Therefore, it can greatly reduce the latency of real-time applications, which is an essential feature of fog computing and its most important advantage over cloud computing. This study proposes a new approach to the placement of services generated by applications running on IoT devices in fog computing. IoT devices send applications to the fog environment, where each application contains a set of services. The purpose of solving the IoT service placement problem is to deploy these services efficiently on fog cells. For this purpose, it is assumed that the services received from the IoT applications arrive as a directed acyclic graph whose edges depict the communication between the services. Then, the imperialist competitive algorithm is used to place and select the destination for IoT services. The simulation results of the iFogSim simulator in different experiments showed that the imperialist competitive algorithm with the proposed graph partitioning approach improves service placement on the fog infrastructure compared to the genetic algorithm and the best-fit algorithm.
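
Without reproducing the imperialist competitive algorithm itself, the sketch below shows how the underlying problem can be encoded: services form a directed acyclic graph, a candidate placement maps each service to a fog cell, and a fitness function penalizes cross-cell traffic and capacity violations. Service demands, data volumes, and penalty factors are invented for illustration.

# Sketch of the problem representation only (not the imperialist competitive
# algorithm): a service DAG, a placement of services onto fog cells, and a
# fitness function penalizing cross-cell traffic and overload. Numbers are made up.
services = {"sense": 1, "filter": 2, "aggregate": 3, "alert": 1}   # CPU demand
dag_edges = [("sense", "filter", 5), ("filter", "aggregate", 3),
             ("aggregate", "alert", 1)]                            # data volume per edge
fog_capacity = {"cell_a": 4, "cell_b": 4}

def fitness(placement, link_cost=2.0, overload_penalty=100.0):
    cost = 0.0
    # communication cost for every DAG edge that crosses fog cells
    for src, dst, volume in dag_edges:
        if placement[src] != placement[dst]:
            cost += link_cost * volume
    # capacity violations per cell
    load = {cell: 0 for cell in fog_capacity}
    for svc, cell in placement.items():
        load[cell] += services[svc]
    for cell, used in load.items():
        cost += overload_penalty * max(0, used - fog_capacity[cell])
    return cost

p1 = {"sense": "cell_a", "filter": "cell_a", "aggregate": "cell_b", "alert": "cell_b"}
p2 = {"sense": "cell_a", "filter": "cell_b", "aggregate": "cell_a", "alert": "cell_b"}
print(fitness(p1), "<", fitness(p2))   # p1 keeps chatty services co-located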

Journal ArticleDOI
TL;DR: Borderless is proposed to foster global travel, allowing travelers and countries to collaboratively engage in a secure adaptive proof protocol, dubbed Proof-of-COVID-19 status, over a number of arbitrary statements to ascertain that the traveler poses no danger irrespective of the country they are located in.
Abstract: The COVID-19 pandemic undoubtedly lingers on and has brought unprecedented changes globally, including to travel arrangements. Blockchain-based solutions have been proposed to aid travel amid the pandemic. Presently, extant solutions are country- or region-based, downplay privacy, and are non-responsive, often impractical, and burdened with blockchain-related complexities that present a technological hurdle for travelers. We therefore propose a solution, namely Borderless, to foster global travel by allowing travelers and countries to collaboratively engage in a secure adaptive proof protocol, dubbed Proof-of-COVID-19 status, over a number of arbitrary statements to ascertain that the traveler poses no danger irrespective of the country they are located in. As far as we know, this is the first of its kind. Borderless is implemented as a decentralized application leveraging blockchain as a trust anchor together with decentralized storage technology. Security analysis and evaluation are performed, proving security, privacy preservation, and cost-effectiveness, along with an implementation envisioning it as a blueprint to facilitate cross-border travel during the present and future pandemics. Our experimental results show that it takes less than 60 s and 3 s to onboard users and perform proof verification, respectively, attesting to real usability scenarios, along with the traits of arbitrary proofs that aid responsiveness to the dynamics of pandemics and blockchain abstraction from travelers.

Journal ArticleDOI
TL;DR: In this article, the authors investigate the research efforts that have been conducted on the creation of assistants for software design, construction, and maintenance, paying special attention to the user-assistant interactions.
Abstract: The increasing essential complexity of software systems makes current software engineering methods and practices fall short on many occasions. Software assistants have the ability to help humans achieve a variety of tasks, including the development of software. Such assistants, which show human-like competences such as autonomy and intelligence, help software engineers do their job by empowering them with new knowledge. This article investigates the research efforts that have been conducted on the creation of assistants for software design, construction, and maintenance, paying special attention to the user-assistant interactions. To this end, we followed the standard systematic mapping study method to identify and classify relevant works in the state of the art. Out of the 7580 articles resulting from the automatic search, we identified 112 primary studies that present works which qualify as software assistants. We provide all the resources needed to reproduce our study. We report on the trends and goals of the assistants, the tasks they perform, how they interact with users, the technologies and mechanisms they exploit to embed intelligence and provide knowledge, and their level of automation. We propose a classification of software assistants based on interactions and present an analysis of the different automation patterns. As outcomes of our study, we provide a classification of software assistants dealing with the design, construction, and maintenance phases of software development, discuss the results, identify open lines of work and challenges, and call for new innovative and rigorous research efforts in this field.

Journal ArticleDOI
TL;DR: A privacy-preserving method based on the content extraction signature scheme enables patients to establish fine-grained privacy protection, and a Byzantine fault-tolerant leader election mechanism enhances the security of the Raft algorithm while providing efficiency in data sharing.
Abstract: The Health Internet of Things (Health IoT) has been limited by isolated information and a lack of security. As the combination of blockchain and Health IoT could potentially address these two limitations, it has attracted significant interest. However, blockchain-based systems often fail to balance data sharing and privacy protection. Therefore, we proposed a Health IoT-based privacy-preserving data sharing blockchain system. We designed a privacy-preserving method based on the content extraction signature scheme to enable patients to establish fine-grained privacy protection. We designed a Byzantine fault-tolerant leader election mechanism that enhances the security of the Raft algorithm while providing efficiency in data sharing. Furthermore, we designed a summary contract to ensure efficient data retrieval. The proposed mechanism was evaluated in terms of efficiency and security. The simulation and analysis results demonstrate that our scheme offers a secure and effective technique for achieving privacy-preserving and efficient sharing of IoT medical data.
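
Content extraction signatures are not spelled out in the abstract; as a loose, hedged illustration of the selective-disclosure idea they enable, the sketch below signs a commitment over per-field hashes so that a subset of fields can be revealed while the rest stay hidden and the signature still verifies. HMAC stands in for a real signature scheme, the record layout is hypothetical, and this is not the paper's construction.

# Simplified illustration of selective disclosure in the spirit of content
# extraction signatures: sign a commitment over per-field hashes so a patient
# can reveal a subset of fields while hidden fields stay undisclosed and the
# signature remains verifiable. HMAC replaces a real signature scheme.
import hashlib, hmac, os

def field_hash(name, value, salt):
    return hashlib.sha256(salt + name.encode() + b"=" + value.encode()).digest()

def sign_record(record, signing_key):
    salts = {k: os.urandom(16) for k in record}
    hashes = {k: field_hash(k, v, salts[k]) for k, v in record.items()}
    commitment = hashlib.sha256(b"".join(hashes[k] for k in sorted(hashes))).digest()
    signature = hmac.new(signing_key, commitment, hashlib.sha256).digest()
    return salts, hashes, signature

def verify_disclosure(disclosed, salts, hashes, signature, signing_key):
    # Recompute hashes for the disclosed fields only (in practice only their
    # salts would be shared); reuse published hashes for hidden fields.
    recomputed = dict(hashes)
    for k, v in disclosed.items():
        recomputed[k] = field_hash(k, v, salts[k])
    commitment = hashlib.sha256(b"".join(recomputed[k] for k in sorted(recomputed))).digest()
    expected = hmac.new(signing_key, commitment, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

key = os.urandom(32)
record = {"patient_id": "p-42", "heart_rate": "71", "diagnosis": "hidden-condition"}
salts, hashes, sig = sign_record(record, key)
# Patient shares only heart_rate; diagnosis stays undisclosed.
print(verify_disclosure({"heart_rate": "71"}, salts, hashes, sig, key))   # True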

Journal ArticleDOI
TL;DR: This work presents CppyABM, a framework that unifies ABM features by providing identical ABM semantics and development styles in both C++ and Python, together with the essential binding tools to expose functionality from C++ to Python.
Abstract: Agent-based modeling (ABM) has been extensively used to study the collective behavior of systems emerging from the interaction of numerous independent individuals called agents. Python and C++ are commonly used for ABM thanks to their unique features; the latter offers superior performance while the former provides ease of use and rich libraries in data science, visualization, and machine learning. We present the framework CppyABM, which unifies these features by providing identical ABM semantics and development styles in both C++ and Python, as well as the essential binding tools to expose a certain functionality from C++ to Python. The binding feature allows users to tailor and further extend a type or function within Python while it is originally defined in C++. Using CppyABM, users can choose either C++ or Python depending on their expertise and the specialty of the model, or combine them to benefit from the advantages of both languages simultaneously. We provide showcases of CppyABM's capabilities using several examples in computational biology, ecology, and virology. These examples are implemented in different formats using either C++ or Python or a combination of both, providing a comparison between the performance of the implementation scenarios. The results of these examples show a clear performance advantage for models entirely or partly implemented in C++ compared to purely Python-based implementations.

Journal ArticleDOI
TL;DR: The European Environment for Scientific Software Installations (EESSI) project aims to provide a ready-to-use stack of scientific software installations that can be leveraged easily on a variety of platforms, ranging from personal workstations to cloud environments and supercomputer infrastructure, without making compromises with respect to performance.
Abstract: Getting scientific software installed correctly and ensuring it performs well has been a ubiquitous problem for several decades now, which is compounded currently by the changing landscape of computational science with the (re‐)emergence of different microprocessor families, and the expansion to additional scientific domains like artificial intelligence and next‐generation sequencing. The European Environment for Scientific Software Installations (EESSI) project aims to provide a ready‐to‐use stack of scientific software installations that can be leveraged easily on a variety of platforms, ranging from personal workstations to cloud environments and supercomputer infrastructure, without making compromises with respect to performance. In this article, we provide a detailed overview of the project, highlight potential use cases, and demonstrate that the performance of the provided scientific software installations can be competitive with system‐specific installations.

Journal ArticleDOI
TL;DR: An efficient algorithm for the virtual machine selection process, based on the analytic hierarchy process (AHP) multi-criteria decision-making technique, is presented, which may significantly reduce the overall cost of data centers.
Abstract: Increasing resource efficiency and reducing the energy consumption of cloud data centers is critical, especially during the global coronavirus pandemic. Virtual machine consolidation using live migration maximizes host utilization and reduces energy consumption. An increase in a host's virtual machines during the consolidation process, together with the dynamic workload of the virtual machines, may cause the hosts to become overloaded. One approach to overcome this problem is reducing the host's virtual machines. A crucial issue in improving the quality of the consolidation process is determining the best virtual machine for the migration process. Although the selection process has lower computational complexity than other challenges (like placement and overload prediction) in the consolidation process, this issue has received less attention. This article aims to present an efficient algorithm for the selection process. We first consider five main criteria for the selection process: migration time, migration risk, virtual machine connectivity, releasable resources, and penalty for SLA violation. Then, we propose an algorithm based on the analytic hierarchy process (AHP) multi-criteria decision-making technique. Next, to determine the weights of the proposed criteria, we simulate thousands of virtual machines using the PlanetLab workloads. These weights are tunable based on the data center's preferences. The results of the suggested approach show a 23% reduction in the hosts' energy consumption, a 49% reduction in the number of migrations, and an 18% reduction in SLA violations compared with other techniques. So, using the proposed method may significantly reduce the overall cost of the data centers.
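
As a reminder of how the standard AHP step works, the numpy sketch below derives criterion weights from a Saaty-style pairwise-comparison matrix via its principal eigenvector and then ranks two candidate VMs by a weighted score. The comparison judgments and VM figures are made up; they are not the weights the paper tuned from PlanetLab workloads, and the consistency-ratio check is omitted.

# Standard AHP step for this kind of selection problem: derive criterion
# weights from a pairwise-comparison matrix via its principal eigenvector,
# then rank candidate VMs by a weighted score. All judgments and VM values
# below are hypothetical. Requires numpy.
import numpy as np

criteria = ["migration_time", "migration_risk", "connectivity",
            "releasable_resources", "sla_penalty"]

# Saaty-style pairwise comparisons: A[i, j] = importance of criterion i over j.
A = np.array([
    [1,   3,   5,   2,   1/2],
    [1/3, 1,   3,   1/2, 1/4],
    [1/5, 1/3, 1,   1/3, 1/5],
    [1/2, 2,   3,   1,   1/3],
    [2,   4,   5,   3,   1  ],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()          # normalized criterion weights
print(dict(zip(criteria, weights.round(3))))

# Score candidate VMs (rows) on normalized criterion values in [0, 1], where
# higher always means "better candidate for migration".
vm_scores = np.array([[0.8, 0.6, 0.9, 0.4, 0.7],   # vm-1
                      [0.3, 0.9, 0.5, 0.8, 0.6]])  # vm-2
print("preferred VM:", ["vm-1", "vm-2"][int(np.argmax(vm_scores @ weights))])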

Journal ArticleDOI
TL;DR: In this article, the authors propose a crowd-based requirements engineering by valuation argumentation (CrowdRE-VArg) approach that analyzes end-user discussions in the Reddit forum, identifies conflict-free new features, design alternatives, or issues, and reaches a rationale-based requirements decision by gradually valuating the relative strength of their supporting and attacking arguments.
Abstract: User forums enable a large population of crowd-users to publicly share their experience, useful thoughts, and concerns about software applications in the form of user reviews. Recent research studies have revealed that end-user reviews contain rich and pivotal sources of information for software vendors and developers that can help undertake software evolution and maintenance tasks. However, such user-generated information is often fragmented, with multiple viewpoints from the various stakeholders involved in the ongoing discussions in the Reddit forum. In this article, we propose a crowd-based requirements engineering by valuation argumentation (CrowdRE-VArg) approach that analyzes end-user discussions in the Reddit forum, identifies conflict-free new features, design alternatives, or issues, and reaches a rationale-based requirements decision by gradually valuating the relative strength of their supporting and attacking arguments. The proposed approach helps negotiate conflicts over new features or issues between different crowd-users on the fly by finding a settlement that satisfies the crowd-users involved in the ongoing discussion in the Reddit forum, using argumentation theory. For this purpose, we adopted the bipolar gradual valuation argumentation framework, extended from the abstract argumentation framework and the abstract valuation framework. The automated CrowdRE-VArg approach is illustrated through a sample crowd-user conversation topic taken from the Reddit forum about the Google Maps mobile application. Finally, we applied natural language processing and different machine learning algorithms to support the automated execution of the CrowdRE-VArg approach. The results demonstrate that the proposed CrowdRE-VArg approach works as a proof-of-concept and automatically identifies prioritized requirements-related information for software engineers.

Journal ArticleDOI
TL;DR: In this article, an industrial case study is carried out to collect repository data and practitioners' views on six typical architectural smells in a real MSA-based telecommunication system, and a five-aspect conceptual classification covering technology, project, organization, business, and professional aspects is proposed, in which the business and organization aspects take the major roles.
Abstract: As a recently predominant architectural style, MicroService Architecture (MSA) is likely to suffer from poor maintainability due to inappropriate microservice boundaries. Architectural Smell (AS), as a metaphor for potential architectural issues that may have negative impacts on software maintenance, can be used to pinpoint refactoring opportunities for evolving microservice boundaries. However, existing studies mostly focus on AS detection, with little further investigation of the possible impacts, causes, and solutions of AS, which is of little help in addressing bad smells in architecture. Our goal in this study is to bridge this gap by investigating the possible impacts, causes, and solutions of AS in MSA-based systems. An industrial case study is carried out to collect repository data and practitioners' views on six typical ASes in a real MSA-based telecommunication system. Statistical analysis and coding techniques are used in the analyses of the quantitative and qualitative data, respectively. The results show that AS influences the modularity, modifiability, analyzability, and testability of the MSA-based system, which further induces extra cross-team communication and change- and fault-prone microservices. To explore the causes of AS, a five-aspect conceptual classification covering technology, project, organization, business, and professional aspects is proposed, in which the business and organization aspects take the major roles. Both technical and non-technical solutions are distilled to deal with ASes despite potential constraints. These results and their comparison to the current literature are discussed, providing practical implications for coping with AS in microservices.

Journal ArticleDOI
TL;DR: This article explores the synergy of Scrum and Essence, a domain model of software engineering processes, intending to become a common ground for software development methods, bringing clarity into the composition of methods from individual practices.
Abstract: We live at an exciting time where software has become a dominant aspect of our everyday life. Although software provides opportunities for improving various aspects of our society, it also presents many challenges. One of them is the development, deployment, and sustaining of high quality software on a broad scale. While agile methods (Scrum being one of the most prominent examples) ease the process, their popularity deteriorates the clarity and simplicity they were once meant to bring into software development. This article explores the synergy of Scrum and Essence, a domain model of software engineering processes intended to become a common ground for software development methods, bringing clarity into the composition of methods from individual practices. This short communication motivates the interplay of Scrum and Essence and is accompanied by a set of video tutorials and 21 Scrum Essential cards to further guide a team toward a more effective way of working.

Journal ArticleDOI
TL;DR: OntoSuSD is a formal, generic, consistent, and shared knowledge base containing semantic terminology and descriptions of the concepts and relationships generated around the representation and implementation of lean, agile, and green approaches in software development processes.
Abstract: Different software development approaches (SDAs) have been developed with broad portfolios of development processes. Each of the approaches has certain exclusive principles, practices, thinking, and values, which are informally represented, implemented, and improperly institutionalized. Ontologies are developed for the representation, assessment, and adaptation of SDAs separately, without a shared terminology, which may lead to terminological conflict and confusion affecting their simultaneous representation and implementation in the software development industry and academia. Software engineering approaches do not consider and support sustainability as a priority concern. However, the approaches have the capability of supporting sustainable software development in different sustainability aspects. This research article aims at the design and development of an integrated ontology of software engineering approaches (i.e., agile, lean, and green), named OntoSuSD (ontology for sustainable software development), to support sustainable software development knowledge, awareness, and implementation. The goal of OntoSuSD is to propose, design, and develop a formal, generic, consistent, and shared knowledge base containing semantic terminology and descriptions of the concepts and relationships generated around the representation and implementation of lean, agile, and green approaches in software development processes, which will facilitate their simultaneous implementation and assessment for sustainable software development. OntoSuSD is developed using a practical ontology engineering methodology by reusing relevant ontologies, and explicit concepts and properties are defined to fulfill the knowledge requirements and representations of the domain. OntoSuSD is evaluated, and the results indicate that OntoSuSD has a sound ontological design, good domain coverage, and potential applications, and achieves the purpose of the ontology development.

Journal ArticleDOI
TL;DR: It is confirmed that there is a performance gain from increasing the number of threads used to execute an integration process, but it is observed that continuously increasing the number of threads leads to performance degradation in this model.
Abstract: Enterprises increasingly rely on software applications to support their business processes. Since such processes are continually evolving to keep up with market dynamism, companies strive to increase their efficiency, for example, by optimising the integration of the applications supporting these processes. Integration platforms are specialised software tools that allow creating integration processes so that applications can share data and functionality. However, this integration involves several challenges, especially when large volumes of heterogeneous data must be integrated and shared. The performance of an integration process, in terms of message processing, is directly related to the run-time system of the integration platform. This article investigates the impact of the volume of messages and the number of threads used by a run-time system on the makespan and performance of an integration process. The greater the number of messages per second received by the integration process, the higher the volume of messages. The study was based on a run-time system with a task-based execution model and followed a strict protocol to conduct and report our empirical study. We observed an increase in makespan when increasing the volume of messages sent to integration processes, and different behaviours when increasing the number of threads used in their executions. Makespan reduces as the number of threads increases, but only when the volume of inbound messages is not very high. We confirmed that there is a performance gain from increasing the number of threads used to execute an integration process, but observed that continuously increasing the number of threads leads to performance degradation in this model.
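
A small experiment in the same spirit (though far simpler than the platform's task-based run-time) can be reproduced with Python's concurrent.futures: process a fixed batch of messages with a thread pool and measure the makespan for different thread counts. The simulated message handler and all numbers are arbitrary.

# A small experiment in the spirit of the study (not its actual run-time
# system): process a fixed batch of messages with a thread pool and measure
# makespan for different thread counts. The simulated message handler mixes a
# little CPU work with an I/O-like sleep; all numbers are arbitrary.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_message(_msg_id):
    sum(i * i for i in range(20_000))   # CPU-bound portion (holds the GIL)
    time.sleep(0.002)                   # I/O-bound portion (releases the GIL)

def makespan(n_messages, n_threads):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(handle_message, range(n_messages)))
    return time.perf_counter() - start

for threads in (1, 2, 4, 8, 16, 32):
    print(f"{threads:>2} threads -> makespan {makespan(500, threads):.2f} s")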

Journal ArticleDOI
TL;DR: LLAMA is a C++ library that provides a data structure abstraction layer, with example implementations for multidimensional arrays of nested, structured data, and fully C++ compliant methods for defining and switching custom memory layouts for user-defined data types.
Abstract: The performance gap between CPU and memory widens continuously. Choosing the best memory layout for each hardware architecture is increasingly important as more and more programs become memory bound. For portable codes that run across heterogeneous hardware architectures, the choice of the memory layout for data structures is ideally decoupled from the rest of a program. This can be accomplished via a zero-runtime-overhead abstraction layer, underneath which memory layouts can be freely exchanged. We present the low-level abstraction of memory access (LLAMA), a C++ library that provides such a data structure abstraction layer with example implementations for multidimensional arrays of nested, structured data. LLAMA provides fully C++ compliant methods for defining and switching custom memory layouts for user-defined data types. The library is extensible with third-party allocators. Providing two close-to-life examples, we show that the LLAMA-generated array of structs and struct of arrays layouts produce identical code with the same performance characteristics as manually written data structures. Integrations into the SPEC CPU® lbm benchmark and the particle-in-cell simulation PIConGPU demonstrate LLAMA's abilities in real-world applications. LLAMA's layout-aware copy routines can significantly speed up transfer and reshuffling of data between layouts compared with naive element-wise copying. LLAMA provides a novel tool for the development of high-performance C++ applications in a heterogeneous environment.
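
LLAMA itself is a C++ library; to keep this listing's examples in one language, the numpy sketch below illustrates the underlying layout question it addresses by storing the same particle data as an array of structs (a structured dtype) and as a struct of arrays (separate contiguous arrays) and timing one field-wise update. The sizes and the measured gap will vary by machine, and this does not use LLAMA's API.

# Language-neutral illustration of why memory layout matters: the same
# particle data stored as an array of structs (structured dtype) versus a
# struct of arrays (separate contiguous arrays), timed on one field update.
import numpy as np
import time

N = 2_000_000
aos = np.zeros(N, dtype=[("x", "f4"), ("y", "f4"), ("z", "f4"),
                         ("vx", "f4"), ("vy", "f4"), ("vz", "f4")])
soa = {name: np.zeros(N, dtype="f4") for name in ("x", "y", "z", "vx", "vy", "vz")}
dt = np.float32(0.1)

def update_aos():
    aos["x"] += aos["vx"] * dt      # strided access across interleaved fields

def update_soa():
    soa["x"] += soa["vx"] * dt      # contiguous access within each field array

def bench(fn, repeats=5):
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

print(f"AoS update: {bench(update_aos) * 1e3:.1f} ms")
print(f"SoA update: {bench(update_soa) * 1e3:.1f} ms")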

Journal ArticleDOI
TL;DR: In this article, the authors present a systematic literature review of approaches for the automated detection of bugs and vulnerabilities in smart contracts (SCs) and identify the open problems in this research field to provide possible directions for future researchers.
Abstract: Blockchain is a platform for distributed computation that allows users to provide software for a huge range of next-generation decentralized applications without involving trusted third parties. Smart contracts (SCs) are an important component of blockchain applications: they are programmatic agreements among two or more parties that cannot be rescinded. Furthermore, SCs have an important characteristic: they allow users to implement reliable transactions without involving third parties. However, the advantages of SCs come at a price. Like any program, SCs can contain bugs, some of which may also constitute security threats. Writing correct and secure SCs can be extremely difficult because, once deployed, they cannot be modified. Although SCs have been introduced only recently, a large number of approaches have been proposed to find bugs and vulnerabilities in them. In this article, we present a systematic literature review of the approaches for the automated detection of bugs and vulnerabilities in SCs. We survey 68 papers published between 2015 and 2020, and we annotate each paper according to our classification framework to provide quantitative results and identify possible areas not yet explored. Finally, we identify the open problems in this research field to provide possible directions for future researchers.

Journal ArticleDOI
TL;DR: This article presents the SDK4ED platform, which enables efficient technical debt management at the code level, and evaluates its capabilities in an industrial setting, examining the usability of the platform and the financial implications of its usage.
Abstract: Technical debt management is of paramount importance for the software industry, since maintenance is the costliest activity in the software development lifecycle. In this article, we present the SDK4ED platform, which enables efficient technical debt management (i.e., measurement, evolution analysis, prevention, etc.) at the code level, and evaluate its capabilities in an industrial setting. The SDK4ED platform is the outcome of a 3-year project involving several software companies. Since the research rigor of the approaches that reside in SDK4ED has already been validated, in this work we focus on: (a) the presentation of the platform per se; (b) the evaluation of its industrial relevance; (c) the usability of the platform; and (d) the financial implications of its usage.

Journal ArticleDOI
TL;DR: This paper proposes a new XML diff algorithm called jats-diff, able to support a bijection between higher-level modifications made by the authors, such as structural changes and restyling, and the changes detected between XML documents.
Abstract: The writing of digital text documents has become a lengthy process that usually goes through several revision rounds. Document comparison is important for the human reader interested in the changes made by the authors. These documents contain structured data, with text-centric XML as one of their main storage formats. Current XML diff algorithms are able to represent differences only with a limited number of edit operations: insert, delete, move, and update. This approach does not fit the scope of digital text document comparison, where the human reader needs to understand the actual modifications made by the author. JATS being a text-centric XML vocabulary, we propose in this paper a new XML diff algorithm called jats-diff, able to support a bijection between higher-level modifications made by the authors, such as structural changes and restyling, and the changes detected between XML documents. In addition, jats-diff provides similarity information between different nodes in order to measure the impact of the text changes on the XML tree.
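
jats-diff works at a much higher level, but for readers new to XML diffing, the toy Python sketch below shows the baseline kind of comparison such tools go beyond: walking two trees in parallel and reporting low-level text or structure changes at matching positions. The sample JATS-like fragments are invented.

# Toy illustration only (far simpler than jats-diff): walk two XML trees in
# parallel and report changed tags, text, or child counts at matching positions.
import xml.etree.ElementTree as ET

def diff(old, new, path=""):
    changes = []
    here = f"{path}/{old.tag}"
    if old.tag != new.tag:
        changes.append(("rename", here, old.tag, new.tag))
    if (old.text or "").strip() != (new.text or "").strip():
        changes.append(("text", here, (old.text or "").strip(), (new.text or "").strip()))
    for o, n in zip(list(old), list(new)):          # compare children pairwise
        changes.extend(diff(o, n, here))
    if len(old) != len(new):
        changes.append(("children", here, len(old), len(new)))
    return changes

a = ET.fromstring("<sec><title>Methods</title><p>We used BAN logic.</p></sec>")
b = ET.fromstring("<sec><title>Methods</title><p>We used BAN logic and Scyther.</p></sec>")
for change in diff(a, b):
    print(change)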