
Showing papers in "Acta Cybernetica in 2020"


Journal ArticleDOI
TL;DR: A comparison analysis between the proposed deterministic bounding method and the classical least-squares adjustment has been conducted in terms of accuracy and reliability, and a new concept of Minimum Detectable Biases is proposed.
Abstract: Reliable confidence domains for positioning with the Global Navigation Satellite System (GNSS) and inconsistency measures for the observations are of great importance for any navigation system, especially for safety-critical applications. In this work, deterministic error bounds are introduced in the form of intervals to assess remaining observation errors. The intervals can be determined based on expert knowledge or, as in our case, based on a sensitivity analysis of the measurement correction process. Using convex optimization, bounding zones are computed for GPS positioning which satisfy the geometrical constraints imposed by the observation intervals. The bounding zone is a convex polytope. When exploiting only the navigation geometry, a confidence domain is computed in the form of a zonotope. We show that the relative volume between the polytope and the zonotope can be considered an inconsistency measure. A small polytope volume indicates bad consistency of the observations. In extreme cases, empty sets are obtained, which indicates large outliers. We explain how the shape and volume of the polytopes are related to the positioning geometry. Furthermore, we propose a new concept of Minimum Detectable Biases. Using the example of the Klobuchar ionospheric model and the Saastamoinen tropospheric model, we show how observation intervals can be determined via sensitivity analysis of these correction models for a real measurement campaign. Taking GPS code data from simulations and real experiments, a comparison analysis between the proposed deterministic bounding method and the classical least-squares adjustment has been conducted in terms of accuracy and reliability. It shows that the computed polytopes always enclose the reference trajectory. In the case of large outliers, large position deviations persist in the least-squares solution, while the polytope algorithm yields empty sets and thus successfully detects the cases with outliers.
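The empty-set outlier test described in this abstract can be illustrated with a one-dimensional toy problem. This is a sketch only: the paper's method uses convex optimization over full polytopes, and the geometry assumed here (receiver to the left of every satellite) is a hypothetical simplification.

```python
def position_interval(sat_positions, ranges, err):
    """Intersect the feasible position intervals implied by each
    pseudo-range measurement (1D toy: receiver at x, satellite at s,
    measured range r, so x lies in [s - r - err, s - r + err]).
    Returns None when the intersection is empty, which flags
    inconsistent observations."""
    lo, hi = float("-inf"), float("inf")
    for s, r in zip(sat_positions, ranges):
        lo = max(lo, s - r - err)
        hi = min(hi, s - r + err)
        if lo > hi:
            return None  # empty set -> large outlier detected
    return (lo, hi)
```

With consistent ranges the intersection is a non-empty interval enclosing the true position; a single biased range empties it, which is exactly how the abstract's bounding method signals outliers.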

8 citations


Journal ArticleDOI
TL;DR: In recent years, numerous interval-based simulation techniques have been developed which allow for a verified computation of outer interval enclosures for the sets of reachable states of dynamic systems represented by finite-dimensional sets of ordinary differential equations (ODEs).
Abstract: In many fields of engineering as well as computational physics, it is necessary to describe dynamic phenomena which are characterized by an infinitely long horizon of past state values. This infinite horizon of past data then influences the evolution of future state trajectories. Such phenomena can be characterized effectively by means of fractional-order differential equations. In contrast to classical linear ordinary differential equations, linear fractional-order models have frequency domain characteristics with amplitude responses that deviate from the classical integer multiples of ±20 dB per frequency decade and, respectively, deviate from integer multiples of ±2 in the limit values of their corresponding phase response. Although numerous simulation approaches have been developed in recent years for the numerical evaluation of fractional-order models with point-valued initial conditions and parameters, the robustness analysis of such system representations is still a widely open area of research. This statement is especially true if interval uncertainty is considered with respect to initial states and parameters. Therefore, this paper summarizes the current state of the art concerning the simulation-based analysis of fractional-order dynamics with a restriction to those approaches that can be extended to set-valued (interval) evaluations for models with bounded uncertainty. In particular, it is shown how verified simulation techniques for integer-order models with uncertain parameters can be extended toward fractional counterparts. Selected linear and nonlinear illustrative examples conclude the paper, to visualize algorithmic properties of the suggested interval-based simulation methodology and to point out directions of ongoing research.
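The fractional-order models surveyed above are commonly discretized with the Grünwald-Letnikov scheme. A minimal point-valued sketch follows; the set-valued (interval) extension the paper discusses is not reproduced here.

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights c_j = (-1)^j * binom(alpha, j),
    via the recurrence c_0 = 1, c_j = c_{j-1} * (1 - (alpha + 1)/j)."""
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def gl_derivative(f, alpha, t, h):
    """First-order Grunwald-Letnikov approximation of the fractional
    derivative of order alpha of f at time t (step size h, history
    starting at time 0)."""
    n = int(round(t / h))
    c = gl_weights(alpha, n)
    return sum(c[j] * f(t - j * h) for j in range(n + 1)) / h ** alpha
```

For alpha = 1 the weights collapse to [1, -1, 0, ...], so the scheme reduces to the ordinary backward difference quotient, which is a convenient sanity check.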

8 citations


Journal ArticleDOI
TL;DR: This work proposes here a set-membership method based on interval analysis to detect different types of discontinuities, including the sliding surface where the state trajectory jumps indefinitely between two distinct behaviors.
Abstract: When implementing a non-continuous controller for a cyber-physical system, it may happen that the evolution function of the closed-loop system is no longer piecewise continuous along the trajectory, mainly due to if statements inside the control algorithm. As a consequence, an unwanted chattering effect may occur. This behavior is often difficult to observe even in simulation. We propose here a set-membership method based on interval analysis to detect different types of discontinuities. One of them is the sliding surface where the state trajectory jumps indefinitely between two distinct behaviors. As an application, we consider the validation of a sailboat controller. We show that our approach is able to detect and explain some unwanted sliding effects that may be observed in rare and specific situations on our actual sailboat robots.
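The core of such a set-membership test can be sketched as follows: evaluate the controller's if-condition over a state box with interval arithmetic and flag boxes where both branches remain possible. The switching function below is a hypothetical example, not the sailboat controller's actual condition.

```python
def sigma_interval(box):
    """Interval evaluation of a switching function sigma(x) = x1 + x2
    over an axis-aligned box ((lo1, hi1), (lo2, hi2))."""
    (l1, h1), (l2, h2) = box
    return (l1 + l2, h1 + h2)

def may_switch(box):
    """True when both branches of `if sigma(x) > 0:` are possibly
    reachable inside the box, i.e. the box may cross the surface
    and is a candidate for sliding/chattering behavior."""
    lo, hi = sigma_interval(box)
    return lo <= 0 <= hi
```

Boxes for which `may_switch` returns True would then be bisected or examined further; boxes where it returns False are guaranteed to follow a single continuous behavior.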

4 citations


Journal ArticleDOI
TL;DR: It is shown how interval arithmetic approaches can be employed to solve the necessary optimality criteria for the fluid velocity reconstruction under the assumption of bounded measurement errors.
Abstract: Magnetic resonance imaging does not only have a large number of applications in the field of medical examinations. In addition, several promising applications have been reported for the measurement of technical fluid flows and of temperature fields in technical devices that do not allow for classical access by either arrays of flow meters on the one hand or arrays of temperature sensors such as thermocouples on the other. Because magnetic resonance imaging can be performed in a non-invasive manner, it has the advantage of providing relevant data without disturbing the velocity and temperature fields by external sensor devices. Moreover, measurement information can also be obtained for scenarios in which direct access to the media under investigation is hardly possible due to constructive limitations. To make this kind of measurement applicable also to dynamic scenarios, not only the spatial resolution but also the temporal one needs to be sufficiently accurate. If the temporal resolution is of interest, an acceleration of the measurement process becomes possible by compressed sensing techniques which make use of an undersampling of the so-called k-space. However, such compressed sensing approaches require a reconstruction of the original fields of the physical variables to be measured. In this paper, it is shown how interval arithmetic approaches can be employed to solve the necessary optimality criteria for the fluid velocity reconstruction under the assumption of bounded measurement errors.

4 citations


Journal ArticleDOI
TL;DR: The verification of optimized control for elastic rod motion involves the local and integral error estimates proposed and a FEM solver for mechanical systems with varying distributed parameters and linear boundary conditions of different kinds is presented.
Abstract: To model vibrations in flexible structures, a variational formulation of PDE control problems is considered in the frame of the method of integrodifferential relations. This approach allows one to estimate a posteriori the quality of finite-dimensional approximations and, as a result, either to refine or coarsen them if necessary. Such estimates also make it possible to correct the input signals. The related control law is regularized via a quadratic cost functional including the discrepancy of the constitutive equations. Procedures for solving optimization problems in the dynamics of linear elasticity have been developed based on the Ritz method and the FEM. The verification of optimized control for elastic rod motion involves the proposed local and integral error estimates. A FEM solver for mechanical systems with varying distributed parameters and linear boundary conditions of different kinds is presented.

4 citations


Journal ArticleDOI
TL;DR: This work considers the formulation and solution of static output feedback design problems using quantifier elimination techniques; stabilization, as well as more specific eigenvalue placement scenarios, is the focus of the paper.
Abstract: This contribution addresses the static output feedback problem of linear time-invariant systems. This is still an area of active research, in contrast to the observer-based state feedback problem, which was solved decades ago. We consider the formulation and solution of static output feedback design problems using quantifier elimination techniques. Stabilization, as well as more specific eigenvalue placement scenarios, is the focus of the paper.

4 citations


Journal ArticleDOI
TL;DR: A new approach to design an interval observer for switched systems with time-varying parameters using a common quadratic Lyapunov function and guaranteeing both cooperativity and Input to State Stability of the upper and lower observation errors is proposed.
Abstract: State estimation for switched systems with time-varying parameters has received great attention during the past decades. In this paper, a new approach to design an interval observer for this class of systems is proposed. The scheduling vector is described by a convex combination so that the parametric uncertainties belong to polytopes. The considered system is also subject to measurement noise and state disturbances which are supposed to be unknown but bounded. The proposed method guarantees both cooperativity and Input-to-State Stability (ISS) of the upper and lower observation errors. Sufficient conditions are given in terms of Linear Matrix Inequalities (LMIs) using a common quadratic Lyapunov function. Finally, a numerical example is provided to show the effectiveness of the designed observer.
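The cooperativity idea behind interval observers can be sketched on a scalar system. This is a toy example only: the paper's design for switched systems relies on LMIs and a common quadratic Lyapunov function, none of which appear here.

```python
def interval_observer(a, wbar, x0_lo, x0_hi, ys, vbar):
    """Scalar interval observer sketch for x+ = a*x + w, y = x + v,
    with 0 < a < 1 (cooperative, stable dynamics), |w| <= wbar and
    |v| <= vbar.  Prediction propagates the bounds monotonically;
    each measurement tightens them by intersection."""
    lo, hi = x0_lo, x0_hi
    bounds = []
    for y in ys:
        # correction: the true state is guaranteed in [y - vbar, y + vbar]
        lo, hi = max(lo, y - vbar), min(hi, y + vbar)
        bounds.append((lo, hi))
        # prediction: cooperative propagation keeps lo <= x <= hi
        lo, hi = a * lo - wbar, a * hi + wbar
    return bounds
```

The invariant maintained is exactly the one the abstract describes: the true state never leaves the enclosure, and the upper/lower observation errors stay nonnegative.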

3 citations


Journal ArticleDOI
TL;DR: This paper provides an application-oriented description of the major steps leading from a control-oriented system model with an associated verified parameter identification to a verified design of interval observers which provide the basis for the development and implementation of cooperativity-preserving feedback controllers.
Abstract: One of the most important advantages of interval observers is their capability to provide estimates for a given dynamic system model in terms of guaranteed state bounds which are compatible with measured data that are subject to bounded uncertainty. However, the inevitable requirement for being able to produce such verified bounds is the knowledge about a dynamic system model in which possible uncertainties and inaccuracies are themselves represented by guaranteed bounds. For that reason, classical point-valued parameter identification schemes are often not sufficient or should, at least, be handled with sufficient care if safety critical applications are of interest. This paper provides an application-oriented description of the major steps leading from a control-oriented system model with an associated verified parameter identification to a verified design of interval observers which provide the basis for the development and implementation of cooperativity-preserving feedback controllers. The corresponding computational steps are described and visualized for the temperature control of a laboratory-scale test rig available at the Chair of Mechatronics at the University of Rostock.

3 citations


Journal ArticleDOI
TL;DR: This paper addresses the problem of cooperative pose estimation in a group of N unmanned aerial vehicles, each equipped with a camera that sees landmarks with known positions, and aims to compute the pose domain of each robot assuming the errors on measurements are bounded.
Abstract: In this article we address the problem of cooperative pose estimation in a group of unmanned aerial vehicles (UAVs) in a bounded-error context. The UAVs are equipped with cameras to track landmarks, and with a communication and ranging system to cooperate with their neighbors. Measurements are represented by intervals, and constraints are expressed on the robots' poses (positions and orientations). Pose-domain subpavings are obtained by using set inversion via interval analysis. Each robot of the group first computes a pose domain using only its own sensor measurements. Then, through exchanges of position boxes, the positions are cooperatively refined by constraint propagation in the group. Results with real robot data are presented, and show that the position accuracy is improved thanks to cooperation.

3 citations


Journal ArticleDOI
TL;DR: This work presents set-valued algorithms to compute tight interval predictions of the state trajectories for a class of uncertain dynamical systems, where the dynamics is described as a sum of a linear and a nonlinear term.
Abstract: This work presents set-valued algorithms to compute tight interval predictions of the state trajectories for a certain class of uncertain dynamical systems. Based on interval analysis and the analytic expression of the state response of discrete-time linear systems, non-conservative numerical schemes are proposed. Moreover, under some stability conditions, the convergence of the width of the predicted state enclosures is proved. The performance of the proposed set-valued algorithms is illustrated through two numerical examples, and the results are compared to those obtained with another method selected from the literature.

2 citations


Journal ArticleDOI
TL;DR: The problem of determining optimal switching instants for the control of hybrid systems under reachability constraints is considered, cast into an interval global optimization problem with differential constraints, where validated simulation techniques and dynamic time meshing are used for its solution.
Abstract: The problem of determining optimal switching instants for the control of hybrid systems under reachability constraints is considered. This optimization problem is cast into an interval global optimization problem with differential constraints, where validated simulation techniques and dynamic time meshing are used for its solution. The approach is applied to two examples, one being the well-known Goddard problem, in which a rocket has to reach a given altitude while consuming the smallest amount of fuel.

Journal ArticleDOI
TL;DR: This work uses monadic transition systems in the category of sets as a framework for modeling discrete time systems and focuses on a combination of non-deterministic and probabilistic behaviors that often arises when modeling complex systems.
Abstract: Safety analysis of high confidence systems requires guaranteed bounds on the probabilities of events of interest. Establishing the correctness of algorithms that aim to compute such bounds is challenging. We address this problem in three steps. First, we use monadic transition systems (MTS) in the category of sets as a framework for modeling discrete time systems. MTS can capture different types of system behaviors, but we focus on a combination of non-deterministic and probabilistic behaviors that often arises when modeling complex systems. Second, we use the category of posets and monotonic maps as a setting to define and compare approximations. In particular, for the MTS of interest, we consider approximations of their configurations based on complete lattices. Third, by restricting to finite lattices, we obtain algorithms that compute over-approximations, i.e., bounds from above within some partial order of approximants, of the system configuration after n steps. Interestingly, finite lattices of “interval probabilities” may fail to accurately approximate configurations that are both non-deterministic and probabilistic, even for deterministic (and continuous) system dynamics. However, better choices of finite lattices are available.

Journal ArticleDOI
TL;DR: A set of tools that aim to support the work of static analyzer developers by making differential testing easier are presented, which includes tools for automatic test suite selection, automated differential experiments, coverage information of increased granularity, statistics collection, metric calculations, and visualizations, all resulting in a convenient, shareable HTML report.
Abstract: Program faults, best known as bugs, are practically unavoidable in today's ever growing software systems. One increasingly popular way of eliminating them, besides tests, dynamic analysis, and fuzzing, is using static analysis based bug-finding tools. Such tools are capable of finding surprisingly sophisticated bugs automatically by inspecting the source code. Their analysis is usually both unsound and incomplete, but still very useful in practice, as they can find non-trivial problems in a reasonable time (e.g. within hours, for an industrial project) without human intervention. Because the problems that static analyzers try to solve are hard, usually intractable, they use various approximations that need to be fine-tuned in order to grant a good user experience (i.e. as many interesting bugs with as few distracting false alarms as possible). For each newly introduced heuristic, this normally happens by performing differential testing of the analyzer on a lot of widely used open source software projects that are known to use related language constructs extensively. In practice, this process is ad hoc, error-prone, poorly reproducible and its results are hard to share. We present a set of tools that aim to support the work of static analyzer developers by making differential testing easier. Our framework includes tools for automatic test suite selection, automated differential experiments, coverage information of increased granularity, statistics collection, metric calculations, and visualizations, all resulting in a convenient, shareable HTML report.

Journal ArticleDOI
TL;DR: The technique for detecting uninitialized C++ variables using the Clang Static Analyzer is overviewed, and various heuristics to guess whether a specific variable was left in an undefined state intentionally are described.
Abstract: Uninitialized variables have been a source of errors since the beginning of software engineering. Some programming languages (e.g. Java and Python) automatically zero-initialize such variables, but others, like C and C++, leave their state undefined. While omitting initialization in C and C++ might be a performance advantage if an initial value can't be supplied, working with such variables is undefined behavior and a common source of instabilities and crashes. To avoid such errors, whenever meaningful initialization is possible, it should be used. Tools for detecting these errors at run time have existed for decades, but they require the problematic code to be executed. Since in many cases the number of possible execution paths is combinatorial, static analysis techniques have emerged as an alternative. In this paper, we give an overview of the technique for detecting uninitialized C++ variables using the Clang Static Analyzer, and describe various heuristics to guess whether a specific variable was left in an undefined state intentionally. We implemented a prototype tool based on our idea and successfully tested it on large open source projects.

Journal ArticleDOI
TL;DR: NPNCs give significantly smaller equation systems than JBNCs, at the cost of a non-constant mass matrix for fully 3D models, a minor downside in the DAETS context.
Abstract: The Natural Coordinates (NCs) method for Lagrangian modelling and simulation of multi-body systems is valued for giving simple, sparse models. We describe our version of it (NPNCs) and compare with the classical approach of Jalon and Bayo (JBNCs). NPNCs use the high-index differential-algebraic equation solver DAETS. Algorithmic differentiation, not symbolic algebra, forms the equations of motion from the Lagrangian. NPNCs give significantly smaller equation systems than JBNCs, at the cost of a non-constant mass matrix for fully 3D models, a minor downside in the DAETS context. A 2D and a 3D example are presented, with numerical results.

Journal ArticleDOI
TL;DR: This paper proposes to use evaluated VA environments for computer-based processes or systems with the main goal of aligning user plans, system models and software results by following the (meta-)design principles of a human-centered verification and validation assessment.
Abstract: Various evaluation approaches exist for multi-purpose visual analytics (VA) frameworks. They are based on empirical studies in information visualization or on community activities, for example, the VA Science and Technology Challenge (2006-2014), created as a community evaluation resource to “decide upon the right metrics to use, and the appropriate implementation of those metrics including datasets and evaluators”. In this paper, we propose to use evaluated VA environments for computer-based processes or systems with the main goal of aligning user plans, system models and software results. For this purpose, trust in the VA outcome should be established, which can be done by following the (meta-)design principles of a human-centered verification and validation assessment and also depending on users’ task models and interaction styles, since the possibility to work with the visualization interactively is an integral part of VA. To define reliable VA, we point out various dimensions of reliability along with their quality criteria, requirements, attributes and metrics. Several software packages are used to illustrate the concepts.

Journal ArticleDOI
TL;DR: Joint entropy proves to be the better steganalysis measure, with 93% detection accuracy and fewer false alarms under varying hiding ratios.
Abstract: Steganography hides data within a media file in an imperceptible way. Steganalysis exposes steganography by using detection measures. Traditionally, steganalysis revealed steganography by targeting perceptible and statistical properties, which resulted in the development of more secure steganography schemes. In this work, we target LSB image steganography by using entropy and joint entropy metrics for steganalysis. First, the embedded image is processed for feature extraction, then analyzed with entropy and joint entropy against the corresponding original image. Second, SVM and Ensemble classifiers are trained according to the analysis results. The decision of the classifiers discriminates the cover image from the stego image. The scheme is further applied to attacked stego images to check detection reliability. Performance evaluation of the proposed scheme is conducted over grayscale image datasets. We analyzed LSB-embedded images by comparing the information gain from the entropy and joint entropy metrics. The results show that the entropy of the suspected image is better preserved than the joint entropy. Before a histogram attack, the detection rate is 70% with the entropy metric and 98% with the joint entropy metric. After an attack, however, the entropy metric ends with a 30% detection rate while the joint entropy metric gives a 93% detection rate. Therefore, joint entropy proves to be the better steganalysis measure, with 93% detection accuracy and fewer false alarms under varying hiding ratios.
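The two features used above can be computed directly from pixel value histograms; a minimal sketch of both metrics:

```python
import math
from collections import Counter

def entropy(pixels):
    """Shannon entropy (bits) of a sequence of pixel values."""
    n = len(pixels)
    counts = Counter(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def joint_entropy(cover, stego):
    """Joint entropy (bits) of co-located pixel pairs taken from the
    cover image and the suspected stego image."""
    n = len(cover)
    counts = Counter(zip(cover, stego))
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

The intuition behind the abstract's result: LSB embedding barely changes the marginal histogram (so plain entropy is nearly preserved), while the joint distribution of (cover, stego) pairs spreads out, making joint entropy the more sensitive detector.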

Journal ArticleDOI
TL;DR: An improved PG-based RDH scheme is developed and presented along with the computational models of its key processes, and experimental results demonstrate that the proposed RDH scheme offers reasonably better embedding rate-distortion performance than the original scheme.
Abstract: Pixel Grouping (PG) of digital images has been a key consideration in the recent development of Reversible Data Hiding (RDH) schemes. While a PG kernel with neighborhood pixels helps compute image groups for better embedding rate-distortion performance, only a horizontal neighborhood pixel group of size 1×3 has so far been considered. In this paper, we formulate PG kernels of sizes 3×1, 2×3 and 3×2 and investigate their effect on the rate-distortion performance of a prominent PG-based RDH scheme. In particular, a kernel of size 3×2 (or 2×3) creates a pair of pixel trios of triangular shape and offers a greater possible correlation among the pixels. This kernel can thus be better utilized for improving a PG-based RDH scheme. Considering this, we develop and present an improved PG-based RDH scheme and the computational models of its key processes. Experimental results demonstrate that our proposed scheme offers reasonably better embedding rate-distortion performance than the original scheme.

Journal ArticleDOI
TL;DR: This study introduces two alternative approaches which can extend the current method and can be applied simultaneously: determining which loops are worth fully unrolling using heuristics, and using a widening mechanism to simulate an arbitrary number of iteration steps.
Abstract: The LLVM Clang Static Analyzer is a source code analysis tool which aims to find bugs in C, C++, and Objective-C programs using symbolic execution, i.e. it simulates the possible execution paths of the code. Currently the simulation of loops is somewhat naive (but efficient), unrolling the loops a predefined constant number of times. However, this approach can result in a loss of coverage in various cases. This study aims to introduce two alternative approaches which can extend the current method and can be applied simultaneously: (1) determining, with heuristics, which loops are worth fully unrolling, and (2) using a widening mechanism to simulate an arbitrary number of iteration steps. These methods were evaluated on numerous open source projects, and proved to increase coverage in most cases. This work also laid the infrastructure for future loop modeling improvements.
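The first idea, deciding which loops are worth fully unrolling, can be sketched as a simple budget check. The thresholds and inputs below are illustrative assumptions, not the analyzer's actual heuristics.

```python
def should_fully_unroll(trip_count, body_stmts, max_steps=128):
    """Heuristic sketch: fully unroll only loops whose trip count is
    known at analysis time and whose total simulated work (iterations
    times statements per iteration) stays within a small budget."""
    if trip_count is None:      # loop bound not known statically
        return False
    return trip_count * body_stmts <= max_steps
```

Loops rejected by such a check would fall back to the default bounded unrolling, or to the widening mechanism mentioned as the second approach.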

Journal ArticleDOI
TL;DR: The main idea is to find a joint approach for an interval-based gain scheduling controller while simultaneously reducing overestimation by enclosing state intervals with the least amount of conservativity by exploiting cooperativity and an exponential approach.
Abstract: In real-life applications, dynamic systems are often subject to uncertainty due to model simplifications, measurement inaccuracy or approximation errors which can be mapped to specific parameters. Uncertainty in dynamic systems can come either in stochastic forms or as interval representations. The latter is applied if the uncertainty is bounded, as is done in this paper. The main idea is to find a joint approach for an interval-based gain-scheduling controller while simultaneously reducing overestimation by enclosing state intervals with the least amount of conservativity. The robust and/or optimal control design is realized using linear matrix inequalities (LMIs) to find an efficient solution and aims at a guaranteed stabilization of the system dynamics over a predefined time horizon. A temporal reduction of the widths of intervals representing worst-case bounds of the system states at a specific point of time should occur due to the asymptotic stability proven by the employed LMI-based design. However, for commonly used approaches in the computation of interval enclosures, those interval widths seemingly blow up due to the wrapping effect in many cases. To avoid this, we provide two interval enclosure techniques, an exploitation of cooperativity and an exponential approach, and discuss their applicability taking into account two real-life applications, a high-bay rack feeder and an inverted pendulum.
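The wrapping effect mentioned above is easy to reproduce: iterating a pure rotation in naive interval arithmetic inflates the enclosure by a factor of sqrt(2) per step even though the true set never grows. The demonstration below is a minimal standalone example, unrelated to the paper's applications.

```python
import math

def naive_interval_step(A, box):
    """One step of x+ = A @ x evaluated in interval arithmetic: each
    output interval is the natural interval extension of the dot
    product of a matrix row with the box of state intervals."""
    out = []
    for row in A:
        lo = sum(min(a * l, a * h) for a, (l, h) in zip(row, box))
        hi = sum(max(a * l, a * h) for a, (l, h) in zip(row, box))
        out.append((lo, hi))
    return out

# A pure rotation by 45 degrees preserves the true state set, yet the
# axis-aligned interval enclosure grows by sqrt(2) per step.
c = math.cos(math.pi / 4)
A = [[c, -c], [c, c]]
box = [(-1.0, 1.0), (-1.0, 1.0)]
for _ in range(3):
    box = naive_interval_step(A, box)
```

After three steps the width of each component has grown from 2 to roughly 5.66, although the exact image of the initial box still fits inside a box of width 2·sqrt(2); this is precisely the blow-up the paper's cooperativity-based and exponential enclosures are designed to avoid.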

Journal ArticleDOI
TL;DR: A mathematical model of the problem is presented, a two-phase graph coloring method for the crew rostering problem is introduced, and the results are compared to the solutions of the integer programming model for moderate-sized problem instances.
Abstract: In recent years, personnel cost has become a major factor in the financial management of many companies and institutions. Firms are obligated to employ their workers in accordance with the law prescribing labour rules. Companies can save costs by minimizing the differences between the real and the expected work times. Crew rostering, the assignment of workers to previously determined shifts, has been widely studied in the literature. In this paper, a mathematical model of the problem is presented and a two-phase graph coloring method for the crew rostering problem is introduced. Our method has been tested on artificially generated and real-life input data. The results of the new algorithm have been compared to the solutions of the integer programming model for moderate-sized problem instances.
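The graph-coloring view of rostering can be sketched as follows, with shifts as vertices, an edge between any two shifts one worker cannot take both of, and colors as workers. This is a greedy first-fit sketch only; the paper's method is a two-phase algorithm and is not reproduced here.

```python
def greedy_coloring(n_shifts, conflicts):
    """Greedy first-fit coloring: assign to each shift (vertex) the
    smallest worker index (color) not already used by a conflicting
    shift.  Conflicts are given as (shift, shift) pairs."""
    adj = {v: set() for v in range(n_shifts)}
    for u, v in conflicts:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for v in range(n_shifts):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

The number of distinct colors in the result is an upper bound on the number of workers needed, which is the quantity a rostering method tries to drive down.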

Journal ArticleDOI
TL;DR: A crypto-watermarking algorithm based on sparse sampling, implemented during the analog-to-digital conversion process itself, is proposed and results in good robustness against various signal attacks such as echo addition, noise addition and reverberation.
Abstract: In the recent era the growth of technology is tremendous and, at the same time, the misuse of technology is increasing at an equal scale. Owners therefore have to protect multimedia data from malicious use and piracy. This has led researchers to the new era of cryptography and watermarking. In traditional security algorithms for audio, the algorithm is applied to the digital data after the conventional analog-to-digital conversion. In this article, however, we propose a crypto-watermarking algorithm based on sparse sampling that is implemented during the analog-to-digital conversion process itself. The watermark is generated by exploiting the structure of the Haar transform. The performance of the algorithm is tested on various audio signals; the obtained SNR is greater than 30 dB, and the algorithm shows good robustness against various signal attacks such as echo addition, noise addition and reverberation.

Journal ArticleDOI
TL;DR: A method to compute the return types for simple recursive functions in Scala by making a heuristic assumption on the return type based on the non-recursive execution branches and providing a proof of the correctness of this method.
Abstract: Scala is a well-established multi-paradigm programming language known for its terseness that includes advanced type inference features. Unfortunately, this type inference algorithm does not support typing of recursive functions. This is both against the original design philosophies of Scala and puts an unnecessary burden on the programmer. In this paper we propose a method to compute the return types of simple recursive functions in Scala. We make a heuristic assumption on the return type based on the non-recursive execution branches and provide a proof of this method's correctness. The algorithm does not have a significant effect on the compilation speed. We implemented our method as an extension prototype in the Scala compiler and used it to successfully test our method on various examples. The compiler extension prototype is available for further tests.
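The heuristic can be illustrated in a language-agnostic way. Python is used here purely for illustration (the paper's implementation lives in the Scala compiler), and the branch encoding below is a hypothetical simplification.

```python
def infer_return_type(branch_types):
    """Sketch of the heuristic: assume the return type of a simple
    recursive function from its non-recursive branches, where "rec"
    marks a branch that ends in a recursive call.  Returns None when
    every branch is recursive, i.e. the assumption cannot be made."""
    base = {t for t in branch_types if t != "rec"}
    if not base:
        return None
    if len(base) == 1:
        return base.pop()
    return "Any"  # stand-in for the least upper bound of the types
```

For instance, a factorial-style function with branches returning `Int` and recursing would be assigned `Int` from its base case alone.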

Journal ArticleDOI
TL;DR: This work proves the existence of non-trivial solutions of the one-dimensional stationary Schrödinger-Poisson system using computer-assisted methods and uses the Rayleigh-Ritz method and a corollary of the Temple-Lehmann Theorem to get enclosures of the crucial eigenvalues of the linearization below the essential spectrum.
Abstract: Motivated by the three-dimensional time-dependent Schrödinger-Poisson system, we prove the existence of non-trivial solutions of the one-dimensional stationary Schrödinger-Poisson system using computer-assisted methods. Starting from a numerical approximate solution, we compute a bound for its defect, and a norm bound for the inverse of the linearization at the approximate solution. For the latter, eigenvalue bounds play a crucial role, especially for the eigenvalues “close to” zero. Therefore, we use the Rayleigh-Ritz method and a corollary of the Temple-Lehmann Theorem to get enclosures of the crucial eigenvalues of the linearization below the essential spectrum. With these data in hand, we can use a fixed-point argument to obtain the desired existence of a non-trivial solution “nearby” the approximate one. In addition to the pure existence result, the methods used also provide an enclosure of the exact solution.

Journal ArticleDOI
TL;DR: This article gives another, somewhat simpler and more straightforward proof of the undecidability of the problem by using the same source of reductions as Post did, investigates these very different techniques, and points out some peculiarities in the approach used by Post.
Abstract: In 1946, Emil Leon Post (Bulletin of Amer. Math. Soc. 52 (1946), 264-268) introduced his famous correspondence decision problem, nowadays known as the Post Correspondence Problem (PCP). Post proved the undecidability of the PCP by a reduction from his normal systems. In the present article we follow the steps of Post, and give another, somewhat simpler and more straightforward proof of the undecidability of the problem by using the same source of reductions as Post did. We investigate these very different techniques, and point out some peculiarities in the approach used by Post.
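While the PCP itself is undecidable, verifying a proposed solution is straightforward; a minimal checker for a candidate index sequence:

```python
def is_pcp_solution(pairs, indices):
    """Check whether a non-empty index sequence solves a Post
    Correspondence Problem instance given as word pairs (u_i, v_i):
    the concatenations of the top and bottom words must coincide."""
    if not indices:
        return False
    top = "".join(pairs[i][0] for i in indices)
    bottom = "".join(pairs[i][1] for i in indices)
    return top == bottom
```

The undecidability result says precisely that no algorithm can decide, for every instance, whether such a sequence exists; only checking a given one is easy.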

Journal ArticleDOI
TL;DR: This article presents an extension of the Zelus compiler to generate interval-based guaranteed simulations of IVPs using DynIbex, which is conservative since it does not break the existing compilation workflow.
Abstract: Modeling continuous-time dynamical systems is a complex task. Fortunately some dedicated programming languages exist to ease this work. Zelus is one such language that generates a simulation executable which can be used to study the behavior of the modeled system. However, such simulations cannot handle uncertainties on some parameters of the system. This makes it necessary to run multiple simulations to check that the system fulfills particular requirements (safety for instance) for all the values in the uncertainty ranges. Interval-based guaranteed integration methods provide a solution to this problem. The DynIbex library provides such methods but it requires a manual encoding of the system in a general purpose programming language (C++). This article presents an extension of the Zelus compiler to generate interval-based guaranteed simulations of IVPs using DynIbex. This extension is conservative since it does not break the existing compilation workflow.

Journal ArticleDOI
TL;DR: In this paper, the behavior of the true dimension of subfield subcodes of Hermitian codes was studied and it was concluded that they can be estimated by the extreme value distribution function.
Abstract: In this paper, we study the behavior of the true dimension of the subfield subcodes of Hermitian codes. Our motivation is to use these classes of linear codes to improve the parameters of the McEliece cryptosystem, such as key size and security level. The McEliece scheme is one of the promising alternatives to current public-key schemes, since over the last four decades it has resisted all known quantum computing attacks. By computing and analyzing a data collection of true dimensions of subfield subcodes, we concluded that they can be estimated by the extreme value distribution function.