
Showing papers in "Acta Cybernetica in 2021"


Journal ArticleDOI
TL;DR: A location-aware and decentralized multi-layer model of resource discovery (LaMRD) in IoT that guarantees important security properties and shows lower latency compared to cloud-based and decentralized resource discovery.

Abstract: The resources in the Internet of Things (IoT) network are distributed among different parts of the network. Considering the huge number of IoT resources, the task of discovering them is challenging. Registering them in a centralized server such as a cloud data center is one possible solution, but with billions of IoT resources of limited computation power, the centralized approach leads to efficiency and security issues. In this paper we propose a location-aware and decentralized multi-layer model of resource discovery (LaMRD) in IoT. It allows a resource to be registered publicly or privately, and to be discovered in a decentralized scheme in the IoT network. LaMRD is based on a structured peer-to-peer (p2p) scheme and follows the general system trend of fog computing. Our proposed model utilizes Distributed Hash Table (DHT) technology to create a p2p scheme of communication among fog nodes. Resources are registered in LaMRD based on their locations, which results in low added overhead in the registration and discovery processes. LaMRD generates a single overlay, and it can be generated without a specific organizing entity or location-based devices. LaMRD guarantees important security properties, and it showed lower latency compared to cloud-based and decentralized resource discovery.

4 citations
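The core mechanism of the abstract, registering resources under location-derived keys in a DHT spanning the fog nodes, can be illustrated with a minimal sketch. All names here (`FogDHT`, the `berlin/mitte` location prefix) are illustrative assumptions, not the actual LaMRD protocol or its overlay construction:

```python
import hashlib

class FogDHT:
    """Toy location-keyed DHT: each fog node owns a slice of the hash ring.
    Illustrative only -- not the actual LaMRD design."""

    def __init__(self, node_ids):
        self.nodes = {nid: {} for nid in node_ids}
        self.ring = sorted(node_ids)

    def _owner(self, key):
        # Map the hashed key onto the sorted node ring (consistent-hashing style).
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return self.ring[h % len(self.ring)]

    def register(self, location, resource, public=True):
        # Resources are keyed by their location, so discovery requests for an
        # area land on the same fog node that stored the registrations.
        node = self._owner(location)
        self.nodes[node].setdefault(location, []).append((resource, public))
        return node

    def discover(self, location):
        # Only publicly registered resources are visible to a plain discovery.
        node = self._owner(location)
        return [r for r, pub in self.nodes[node].get(location, []) if pub]

dht = FogDHT(["fog-a", "fog-b", "fog-c"])
dht.register("berlin/mitte", "temp-sensor-1")
dht.register("berlin/mitte", "cam-7", public=False)
print(dht.discover("berlin/mitte"))  # only the public resource is returned
```

Keying by location is what keeps the registration and discovery overhead low: a query touches only the node responsible for that location's hash.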



Journal ArticleDOI
TL;DR: This work presents the development of a fault detection and exclusion (FDE) algorithm for GNSS measurements, an extension of an existing tightly-coupled navigation filter that integrates measurements from GNSS and an inertial measurement unit.

Abstract: This publication presents the development of integrity monitoring and fault detection and exclusion (FDE) of pseudorange measurements, which are used to aid a tightly-coupled navigation filter. This filter is based on an inertial measurement unit (IMU) and is aided by signals of the global navigation satellite system (GNSS). Particularly, the GNSS signals include the global positioning system (GPS) and Galileo. When using GNSS signals, navigation systems suffer from signal interference resulting in large pseudorange errors. Further, the higher number of satellites in a dual-constellation setup increases the possibility that satellite observations contain multiple faults. In order to ensure integrity and accuracy of the filter solution, it is crucial to provide sufficient fault-free GNSS measurements for the navigation filter. For this purpose, a new hybrid strategy is applied, combining conventional receiver autonomous integrity monitoring (RAIM) and innovative robust set inversion via interval analysis (RSIVIA). To further improve the performance, as well as the computational efficiency of the algorithm, the estimated velocity and its variance from the navigation filter are used to reduce the size of the RSIVIA initial box. The designed approach is evaluated with recorded data from an extensive real-world measurement campaign, which was carried out in GATE Berchtesgaden, Germany. In GATE, up to six Galileo satellites in orbit can be simulated. Further, the signals of simulated Galileo satellites can be manipulated to provide faulty GNSS measurements, such that the fault detection and identification (FDI) capability can be validated. The results show that the designed approach is able to identify the generated faulty GNSS observables correctly and improve the accuracy of the navigation solution. Compared with traditional RSIVIA, the designed new approach provides more timely fault identification and is computationally more efficient.

2 citations
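The detect-then-exclude loop at the heart of snapshot RAIM can be sketched on a deliberately simplified scalar problem: estimate one common bias from redundant measurements, test residual consistency, and exclude the most inconsistent measurement when the test fails. This is a toy analogue under made-up numbers; the paper's hybrid RAIM/RSIVIA scheme works on full pseudorange geometry and interval boxes:

```python
def fde_scalar(measurements, threshold):
    """Toy snapshot fault detection and exclusion (RAIM-style).
    Estimates a single common bias as the mean of the measurements,
    tests the residual sum of squares against a threshold, and
    excludes the worst measurement while the test fails."""
    used = list(measurements)
    excluded = []
    est = None
    while len(used) > 1:
        est = sum(used) / len(used)
        residuals = [m - est for m in used]
        if sum(r * r for r in residuals) <= threshold:
            break  # consistency test passed: remaining set is declared fault-free
        worst = max(range(len(used)), key=lambda i: abs(residuals[i]))
        excluded.append(used.pop(worst))  # exclude the most inconsistent one
    return est, excluded

# Five consistent pseudorange-like values and one 50 m outlier:
est, excl = fde_scalar([100.1, 99.9, 100.0, 100.2, 99.8, 150.0], threshold=1.0)
print(est, excl)  # the outlier 150.0 is excluded, estimate settles near 100
```

The redundancy is what makes exclusion possible: with only one measurement left there is nothing to cross-check against, which is why multi-constellation (GPS + Galileo) visibility matters for FDE.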



Journal ArticleDOI
TL;DR: An interval method based on the Pontryagin Minimum Principle is proposed to enclose the solutions of an optimal control problem with embedded bounded uncertainties and is used to compute an enclosure of all optimal trajectories of the problem.
Abstract: An interval method based on Pontryagin's Minimum Principle is proposed to enclose the solutions of an optimal control problem with embedded bounded uncertainties. This method is used to compute an enclosure of all optimal trajectories of the problem, as well as open-loop and closed-loop enclosures meant to validate an optimal guidance algorithm on a concrete system with inaccurate knowledge of the parameters. The differences in geometry of these enclosures are exposed and showcased on a simple system. These enclosures can guarantee that a given optimal control problem will yield a satisfactory trajectory for any realization of the uncertainties. Contrarily, the probability of failure may not be eliminated and the problem might need to be adjusted.

1 citation
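For reference, the interval enclosures above are built around the classical pointwise conditions of Pontryagin's Minimum Principle; in standard form, for dynamics $\dot{x} = f(x, u)$ with running cost $\ell$:

```latex
% Hamiltonian for \dot{x} = f(x, u) with cost J = \int_0^T \ell(x, u)\,dt:
H(x, p, u) = \ell(x, u) + p^{\top} f(x, u)

% The optimal control minimizes the Hamiltonian pointwise along
% the optimal state/adjoint pair (x^*, p^*):
u^*(t) = \arg\min_{u \in U} H\bigl(x^*(t), p^*(t), u\bigr)

% with the adjoint (costate) dynamics:
\dot{p}(t) = -\frac{\partial H}{\partial x}\bigl(x^*(t), p^*(t), u^*(t)\bigr)
```

The interval method propagates set-valued versions of these conditions, so bounded parameter uncertainty turns the single optimal trajectory into the enclosures described in the abstract.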


Journal ArticleDOI
TL;DR: This paper argues for a modern visualisation of Jenkins pipelines and presents a solution for making Jenkins pipelines comprehensible on the dashboard.

Abstract: Continuous Integration (CI) is an essential approach in modern software engineering. CI tools help merge the recent commits from the developers, so bugs can be discovered in an early phase of development and integration hell can be avoided. Jenkins is the most well-known and most widely used CI tool. Pipelines became first-class citizens in Jenkins 2. Pipelines consist of stages, such as compiling, building a Docker image, integration testing, etc. However, complex Jenkins pipelines are hard to see through and understand. In this paper, we argue for a modern visualisation of Jenkins pipelines. We present our solution for making Jenkins pipelines comprehensible on the dashboard.

1 citation



Journal ArticleDOI
TL;DR: A novel interval contractor based on the confidence assigned to a random variable is proposed; it makes it possible to consider at the same time an interval in which the quantity is guaranteed to lie and a confidence level that reduces the pessimism induced by the interval approach.

Abstract: A novel interval contractor based on the confidence assigned to a random variable is proposed in this paper. It makes it possible to consider at the same time an interval in which the quantity is guaranteed to be, and a confidence level to reduce the pessimism induced by the interval approach. This contractor consists in computing a confidence region. Using different confidence levels, a particular case of a potential cloud can be computed. As an application, we propose to compute the reachable set of an ordinary differential equation in the form of a set of confidence regions, with respect to confidence levels on the initial value.

1 citation
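The basic contraction step can be sketched for the special case of a normally distributed quantity: intersect the guaranteed interval with the symmetric confidence region at a chosen level. The Gaussian assumption and the function names are illustrative; the paper's contractor is more general:

```python
from statistics import NormalDist

def confidence_contract(lo, hi, mean, sigma, level):
    """Contract the guaranteed interval [lo, hi] by intersecting it
    with the symmetric Gaussian confidence region at `level`.
    Sketch only: assumes a normal random variable."""
    k = NormalDist().inv_cdf(0.5 + level / 2)  # half-width quantile
    c_lo, c_hi = mean - k * sigma, mean + k * sigma
    new_lo, new_hi = max(lo, c_lo), min(hi, c_hi)
    if new_lo > new_hi:
        raise ValueError("empty intersection: interval and confidence region disagree")
    return new_lo, new_hi

# A wide guaranteed interval tightened at 95% confidence:
lo2, hi2 = confidence_contract(-10.0, 10.0, mean=0.0, sigma=1.0, level=0.95)
print(lo2, hi2)  # roughly -1.96 to 1.96
```

Running the same contraction at several confidence levels yields a nested family of regions, which is how the "potential cloud" special case mentioned in the abstract arises.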


Journal ArticleDOI
TL;DR: In this article, the authors propose a new algorithm that accesses and interacts with a networked system that runs the unknown protocol in order to infer the Mealy machine representing the protocol's state machine.
Abstract: In this work, we propose a novel solution to the problem of inferring the state machine of an unknown protocol. We extend and improve prior results on inferring Mealy machines, and present a new algorithm that accesses and interacts with a networked system that runs the unknown protocol in order to infer the Mealy machine representing the protocol's state machine. To demonstrate the viability of our approach, we provide an implementation and illustrate the operation of our algorithm on a simple example protocol, as well as on two real-world protocols, Modbus and MQTT.

1 citation
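The basic primitive such an active-learning algorithm relies on is the output query: feed an input word to the system under inference and record the output sequence. A minimal sketch follows, with a made-up two-state toggle protocol as the target (not Modbus or MQTT from the paper):

```python
class Mealy:
    """Minimal Mealy machine: trans[state][input] = (next_state, output)."""

    def __init__(self, trans, start):
        self.trans, self.start = trans, start

    def query(self, word):
        # Output query: the primitive an active learner sends to the
        # networked system whose protocol state machine is being inferred.
        state, outputs = self.start, []
        for sym in word:
            state, out = self.trans[state][sym]
            outputs.append(out)
        return outputs

# Hypothetical turnstile-like protocol used purely for illustration:
toggle = Mealy(
    {"locked":   {"coin": ("unlocked", "ok"), "push": ("locked", "deny")},
     "unlocked": {"coin": ("unlocked", "ok"), "push": ("locked", "ok")}},
    start="locked",
)
print(toggle.query(["push", "coin", "push"]))  # ['deny', 'ok', 'ok']
```

An L*-style learner issues many such queries (plus equivalence checks) to converge on a Mealy machine whose answers match the real system's responses.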


Journal ArticleDOI
TL;DR: The goal is to provide a full-fledged implementation of GRIN by combining the currently available best technologies, such as LLVM, and to evaluate the framework's effectiveness by measuring how the optimizer improves the performance of certain programs.
Abstract: GRIN is short for Graph Reduction Intermediate Notation, a modern back end for lazy functional languages. Most of the currently available compilers for such languages share a common flaw: they can only optimize programs on a per-module basis. The GRIN framework allows for interprocedural whole program analysis, enabling optimizing code transformations across functions and modules as well. Some implementations of GRIN already exist, but most of them were developed only for experimentation purposes. Thus, they either compromise on low level efficiency or contain ad hoc modifications compared to the original specification. Our goal is to provide a full-fledged implementation of GRIN by combining the currently available best technologies like LLVM, and evaluate the framework’s effectiveness by measuring how the optimizer improves the performance of certain programs. We also present some improvements to the already existing components of the framework. Some of these improvements include a typed representation for the intermediate language and an interprocedural program optimization, the dead data elimination.

1 citation
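The flavour of a whole-program elimination pass can be conveyed with a toy backward-liveness sweep over straight-line bindings: anything whose result is never used downstream is dropped. This is only an analogue under an invented list representation; GRIN's dead data elimination operates interprocedurally on a typed graph IR:

```python
def eliminate_dead(assignments, live_out):
    """Backward liveness over straight-line bindings (name, uses, expr):
    keep a binding only if its name is live, and make its operands live.
    Toy analogue of dead code elimination, not GRIN's actual pass."""
    live = set(live_out)
    kept = []
    for name, uses, expr in reversed(assignments):
        if name in live:
            kept.append((name, uses, expr))
            live.discard(name)
            live.update(uses)  # operands of a live binding become live
    return list(reversed(kept))

prog = [
    ("a", [],    "1"),
    ("b", ["a"], "a + 1"),
    ("c", [],    "42"),     # never used downstream: dead
    ("d", ["b"], "b * 2"),
]
print(eliminate_dead(prog, live_out=["d"]))  # binding "c" is removed
```

The whole-program aspect is what distinguishes GRIN: because liveness is known across module boundaries, dead data can be eliminated even when producer and consumer live in different modules.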



Journal ArticleDOI
TL;DR: The problem of safe trajectory tracking is addressed by using the results of a validated path planner, a set of safe trajectories, to produce the set of controls to apply in order to remain inside this set of planned trajectories while avoiding static obstacles.
Abstract: The problem of a safe trajectory tracking is addressed in this paper. It consists in using the results of a validated path planner providing a set of safe trajectories to produce the set of controls to apply to remain inside this set of planned trajectories while avoiding static obstacles. This computation is performed using the differential flatness of many dynamical systems. The method is illustrated in the case of the Dubins car.
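Differential flatness for the Dubins car means the controls can be recovered algebraically from the flat output, the planar position $(x, y)$. A pointwise sketch using finite differences is below; the paper computes guaranteed control *sets* from trajectory tubes with interval tools, which this plain-float version does not attempt:

```python
import math

def flat_to_controls(xs, ys, dt):
    """Recover unicycle/Dubins-style controls (speed v, turn rate w) from
    a sampled flat output (x(t), y(t)) by finite differences:
        v = sqrt(x'^2 + y'^2),  theta = atan2(y', x'),  w = theta'."""
    vs, headings, ws = [], [], []
    for i in range(len(xs) - 1):
        dx, dy = xs[i + 1] - xs[i], ys[i + 1] - ys[i]
        vs.append(math.hypot(dx, dy) / dt)   # speed from position derivative
        headings.append(math.atan2(dy, dx))  # heading from velocity direction
    for i in range(len(headings) - 1):
        ws.append((headings[i + 1] - headings[i]) / dt)  # turn rate
    return vs, ws

# Straight-line motion: constant speed, zero turn rate.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 0.0, 0.0]
vs, ws = flat_to_controls(xs, ys, dt=1.0)
print(vs, ws)
```

Because the map from flat output to controls is explicit, replacing the sampled points by validated trajectory enclosures yields enclosures of the admissible controls, which is the tracking guarantee the abstract describes.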

Journal ArticleDOI
TL;DR: In this article, the authors propose a method to detect errors between the components and their connections, which is capable of revealing errors, which are hidden in the middle of a component, by calculating their pre-and postconditions.
Abstract: P4 is a domain-specific language to develop the packet processing of network devices. These programs can easily hide errors, therefore we give a solution to analyze them and detect predefined errors in them. This paper shows the idea, which works with the P4 code as a set of components and processes them one by one, while calculating their pre- and postconditions. This method does not only detect errors between the components and their connections, but it is capable to reveal errors, which are hidden in the middle of a component. The paper introduces the method and shows its calculation in an example.
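The component-by-component propagation can be sketched with a toy propositional version: carry a set of established facts forward, flag any component whose precondition is not yet established, and extend the facts with its postcondition. The component names and fact strings are invented for illustration; the paper's calculation works on actual P4 constructs:

```python
def check_pipeline(components, initial_facts):
    """Process components one by one, carrying established facts forward.
    Each component is (name, preconditions, added_facts); report an error
    when a precondition is not established by earlier components.
    Toy propositional analogue of pre-/postcondition calculation."""
    facts = set(initial_facts)
    errors = []
    for name, pre, post in components:
        for p in pre:
            if p not in facts:
                errors.append(f"{name}: precondition '{p}' may not hold")
        facts |= set(post)  # the component's postcondition extends the facts
    return errors

# Hypothetical three-stage packet pipeline:
pipeline = [
    ("parser",  [],             ["ipv4.valid"]),
    ("ingress", ["ipv4.valid"], ["ttl.checked"]),
    ("egress",  ["ipv6.valid"], []),  # never established upstream: error
]
print(check_pipeline(pipeline, initial_facts=[]))
```

Checking conditions inside each component as well as at the seams is what lets the method expose errors "hidden in the middle" of a component rather than only at its interface.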




Journal ArticleDOI
TL;DR: In this article, the authors propose an approach to reduce duplicate data dependency transfers in DEWE v3 using a Function as a Service (FaaS) platform for processing non-interactive applications.

Abstract: Scientific workflows have been an increasingly important research area of distributed systems (such as cloud computing). Researchers have shown an increased interest in the automated processing of scientific applications such as workflows. Recently, Function as a Service (FaaS) has emerged as a novel distributed systems platform for processing non-interactive applications. FaaS has limitations in resource use (e.g., CPU and RAM) as well as in state management. In spite of these, initial studies have already demonstrated using FaaS for processing scientific workflows. DEWE v3 executes workflows in this fashion, but it often suffers from duplicate data transfers while using FaaS. This behaviour is due to the handling of intermediate data dependencies after and before each function invocation. These data dependencies could fill the temporary storage of the function environment. Our approach alters the job dispatch algorithm of DEWE v3 to reduce data dependency transfers. The proposed algorithm schedules jobs with precedence requirements to run primarily in the same function invocation. We evaluate our proposed algorithm and the original algorithm with small- and large-scale Montage workflows. Our results show that the improved system can reduce the total execution time of scientific workflows over DEWE v3 by about 10% when using AWS Lambda.
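One way to realize "jobs with precedence requirements run in the same invocation" is to bundle linear precedence chains, so intermediate files stay in the invocation's local temporary storage instead of being uploaded and re-downloaded. The sketch below is an illustrative take on that idea, not DEWE v3's actual dispatch algorithm:

```python
def bundle_chains(preds):
    """Bundle linear precedence chains of a workflow DAG.
    `preds` maps each job to its list of prerequisite jobs; each returned
    bundle is a chain intended to run inside one function invocation."""
    succs = {j: [] for j in preds}
    for j, ps in preds.items():
        for p in ps:
            succs[p].append(j)
    bundles, seen = [], set()
    for j in preds:
        # Start a chain only at jobs that are not the sole successor
        # of a sole-successor predecessor (i.e. not mid-chain).
        if j in seen or (len(preds[j]) == 1 and len(succs[preds[j][0]]) == 1):
            continue
        chain = [j]
        while len(succs[chain[-1]]) == 1:
            nxt = succs[chain[-1]][0]
            if len(preds[nxt]) != 1:
                break  # a join point: cannot keep it in the same invocation
            chain.append(nxt)
        seen.update(chain)
        bundles.append(chain)
    return bundles

# a -> b -> c forms one chain; d is independent.
print(bundle_chains({"a": [], "b": ["a"], "c": ["b"], "d": []}))
```

Fan-in and fan-out points break the chains, since their intermediate data must cross invocation boundaries anyway; only the strictly linear segments benefit from co-location.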

Journal ArticleDOI
TL;DR: This work presents a proposal for adapting this language from the functional to the object-oriented programming paradigm, using Java in place of Erlang as a representative, and formally defines the chosen base refactoring as a composition of scheme instances.
Abstract: Many development environments offer refactorings to improve specific properties of software, but we have no guarantees that these transformations indeed preserve the functionality of the source code they are applied on. An existing domain-specific language, currently specialized for Erlang, makes it possible to formalize automatically verifiable refactorings via instantiating predefined transformation schemes with conditional term rewrite rules. We present a proposal for adapting this language from the functional to the object-oriented programming paradigm, using Java in place of Erlang as a representative. The behavior-preserving property of discussed refactorings is characterized with a multilayered definition of equivalence for Java programs, including the conformity relation of class hierarchies. Based on the decomposition of a complex refactoring rule, we show how new transformation schemes can be identified, along with modifications and extensions of the description language required to accommodate them. Finally, we formally define the chosen base refactoring as a composition of scheme instances.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a parallel execution of DISSECT-CF to simulate large-scale Internet of Things (IoT) scenarios, such as serverless computing systems.
Abstract: Discrete Event Simulation (DES) frameworks have gained significant popularity for supporting and evaluating cloud computing environments. They support decision-making for complex scenarios, saving time and effort. The majority of these frameworks lack parallel execution. Despite being a sequential framework, DISSECT-CF introduced significant performance improvements when simulating Infrastructure as a Service (IaaS) clouds. Even with these improvements over the state-of-the-art sequential simulators, there are several scenarios (e.g., large-scale Internet of Things or serverless computing systems) which DISSECT-CF could not simulate in a timely fashion. To remedy such scenarios, this paper introduces parallel execution to its most abstract subsystem: the event system. The new event subsystem detects when multiple events occur at a specific time instance of the simulation and decides to execute them in either a parallel or a sequential fashion. This decision is mainly based on the number of independent events and the expected workload of a particular event. In our evaluation, we focused exclusively on time management scenarios, while ensuring that the behaviour of the events was equivalent to realistic, larger-scale simulation scenarios. This allowed us to understand the effects of parallelism on the whole framework, while also showing the gains of the new system compared to the old sequential one. With regard to scaling, we observed it to be proportional to the number of cores in the utilised SMP host.
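The parallel-or-sequential decision for the events due at one simulated time instant can be sketched as a simple threshold dispatch. The threshold value and function names are assumptions for illustration; DISSECT-CF itself is a Java framework and also weighs the expected per-event workload:

```python
from concurrent.futures import ThreadPoolExecutor

PARALLEL_THRESHOLD = 4  # assumed cutoff below which thread overhead dominates

def fire_events(events):
    """Fire all independent events due at one simulated time instant.
    Runs them in parallel only when there are enough of them to
    amortise the dispatch overhead; otherwise stays sequential."""
    if len(events) < PARALLEL_THRESHOLD:
        return [ev() for ev in events]              # sequential path
    with ThreadPoolExecutor() as pool:              # parallel path
        return list(pool.map(lambda ev: ev(), events))

# Eight independent events at the same time instant:
results = fire_events([lambda i=i: i * i for i in range(8)])
print(results)
```

Because the events at one time instant are independent by construction, their results are order-insensitive, which is what makes the parallel path behaviourally equivalent to the sequential one.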

Journal ArticleDOI
TL;DR: By separating the various concerns in the transformation process, this approach enables modular and language-parametric implementation; the paper proposes high-level abstractions for refactoring definition and outlines a generic framework capable of verifying and executing refactoring specifications.
Abstract: Refactoring has to preserve the dynamics of the transformed program with respect to a particular definition of semantics and behavioural equivalence. In general, it is rather challenging to relate executable refactoring implementations with the formal semantics of the transformed language. However, in order to make refactoring tools trustworthy, we may need to provide formal guarantees on correctness. In this paper, we propose high-level abstractions for refactoring definition and we outline a generic framework which is capable of verifying and executing refactoring specifications. By separating the various concerns in the transformation process, our approach enables modular and language-parametric implementation.