
Showing papers by "Alberto Sangiovanni-Vincentelli published in 2019"


Proceedings ArticleDOI
01 Oct 2019
TL;DR: A new approach combining domain randomization and pyramid consistency is proposed to learn a highly generalizable model for semantic segmentation of real-world self-driving scenes in a domain generalization setting.
Abstract: We propose to harness the potential of simulation for semantic segmentation of real-world self-driving scenes in a domain generalization fashion. The segmentation network is trained without any information about target domains and tested on the unseen target domains. To this end, we propose a new approach of domain randomization and pyramid consistency to learn a model with high generalizability. First, we propose to randomize the synthetic images with styles of real images in terms of visual appearances using auxiliary datasets, in order to effectively learn domain-invariant representations. Second, we further enforce pyramid consistency across different "stylized" images and within an image, in order to learn domain-invariant and scale-invariant features, respectively. Extensive experiments are conducted on generalization from GTA and SYNTHIA to Cityscapes, BDDS, and Mapillary; and our method achieves superior results over the state-of-the-art techniques. Remarkably, our generalization results are on par with or even better than those obtained by state-of-the-art simulation-to-real domain adaptation methods, which access the target domain data at training time.

228 citations
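The pyramid consistency idea above can be sketched in miniature: features extracted from two differently "stylized" views of the same image are average-pooled at several scales, and the discrepancy between them is penalized at every pyramid level. The pooling windows, L1 distance, and plain-list feature maps below are illustrative assumptions, not the paper's actual implementation.

```python
def avg_pool(fmap, k):
    """Average-pool a 2-D feature map (list of rows) with window and stride k."""
    h, w = len(fmap), len(fmap[0])
    return [[sum(fmap[i + a][j + b] for a in range(k) for b in range(k)) / (k * k)
             for j in range(0, w - w % k, k)]
            for i in range(0, h - h % k, k)]

def pyramid_consistency_loss(fmap_a, fmap_b, scales=(1, 2, 4)):
    """Mean L1 distance between two stylized views' features, summed over scales."""
    loss = 0.0
    for k in scales:
        if k > len(fmap_a) or k > len(fmap_a[0]):
            continue  # skip pyramid levels coarser than the map itself
        pa, pb = avg_pool(fmap_a, k), avg_pool(fmap_b, k)
        n = len(pa) * len(pa[0])
        loss += sum(abs(x - y) for ra, rb in zip(pa, pb)
                    for x, y in zip(ra, rb)) / n
    return loss
```

Identical views incur zero loss at every scale; the more the stylized views' features disagree, at any scale, the larger the penalty.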


Journal ArticleDOI
TL;DR: Test results show that an automated vehicle with optimized plant and controller can perform its tasks well under aggressive, moderate, and conservative driving styles, further improving the overall performance.
Abstract: This paper studies the codesign optimization approach to determine how to optimally adapt automatic control of an intelligent electric vehicle to driving styles. A cyber-physical system (CPS)-based framework is proposed for codesign optimization of the plant and controller parameters for an automated electric vehicle, in view of vehicle's dynamic performance, drivability, and energy along with different driving styles. System description, requirements, constraints, optimization objectives, and methodology are investigated. Driving style recognition algorithm is developed using unsupervised machine learning and validated via vehicle experiments. Adaptive control algorithms are designed for three driving styles with different protocol selections. Performance exploration method is presented. Parameter optimizations are implemented based on the defined objective functions. Test results show that an automated vehicle with optimized plant and controller can perform its tasks well under aggressive, moderate, and conservative driving styles, further improving the overall performance. The results validate the feasibility and effectiveness of the proposed CPS-based codesign optimization approach.

213 citations


Proceedings ArticleDOI
08 Jun 2019
TL;DR: Scenic, a domain-specific probabilistic programming language, is designed for describing scenarios that are distributions over scenes; it allows assigning distributions to features of the scene, as well as declaratively imposing hard and soft constraints over the scene.
Abstract: We propose a new probabilistic programming language for the design and analysis of perception systems, especially those based on machine learning. Specifically, we consider the problems of training a perception system to handle rare events, testing its performance under different conditions, and debugging failures. We show how a probabilistic programming language can help address these problems by specifying distributions encoding interesting types of inputs and sampling these to generate specialized training and test sets. More generally, such languages can be used for cyber-physical systems and robotics to write environment models, an essential prerequisite to any formal analysis. In this paper, we focus on systems like autonomous cars and robots, whose environment is a scene, a configuration of physical objects and agents. We design a domain-specific language, Scenic, for describing scenarios that are distributions over scenes. As a probabilistic programming language, Scenic allows assigning distributions to features of the scene, as well as declaratively imposing hard and soft constraints over the scene. We develop specialized techniques for sampling from the resulting distribution, taking advantage of the structure provided by Scenic's domain-specific syntax. Finally, we apply Scenic in a case study on a convolutional neural network designed to detect cars in road images, improving its performance beyond that achieved by state-of-the-art synthetic data generation methods.

178 citations
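Scenic compiles declarative distributions and constraints into specialized samplers; a much cruder way to get the same effect, sketched below, is plain rejection sampling: draw scene features from their priors and discard scenes that violate the hard constraints. The 1-D road, `min_gap` constraint, and uniform priors are hypothetical stand-ins for Scenic's far richer scene model.

```python
import random

def sample_scene(rng, n_cars=3, road_len=100.0, min_gap=5.0, max_tries=10000):
    """Rejection-sample car positions on a 1-D road: uniform priors on position,
    with a hard constraint that adjacent cars keep at least `min_gap` apart."""
    for _ in range(max_tries):
        xs = sorted(rng.uniform(0.0, road_len) for _ in range(n_cars))
        if all(b - a >= min_gap for a, b in zip(xs, xs[1:])):
            return xs
    raise RuntimeError("constraint too tight: no valid scene found")

scene = sample_scene(random.Random(0))
```

Rejection sampling degrades badly when constraints are tight, which is one reason a language like Scenic benefits from specialized sampling techniques that exploit the structure of the program.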


Posted Content
TL;DR: In this paper, a new approach of domain randomization and pyramid consistency is proposed to learn a model with high generalizability for semantic segmentation of real-world self-driving scenes.
Abstract: We propose to harness the potential of simulation for the semantic segmentation of real-world self-driving scenes in a domain generalization fashion. The segmentation network is trained without any data of target domains and tested on the unseen target domains. To this end, we propose a new approach of domain randomization and pyramid consistency to learn a model with high generalizability. First, we propose to randomize the synthetic images with the styles of real images in terms of visual appearances using auxiliary datasets, in order to effectively learn domain-invariant representations. Second, we further enforce pyramid consistency across different "stylized" images and within an image, in order to learn domain-invariant and scale-invariant features, respectively. Extensive experiments are conducted on the generalization from GTA and SYNTHIA to Cityscapes, BDDS and Mapillary; and our method achieves superior results over the state-of-the-art techniques. Remarkably, our generalization results are on par with or even better than those obtained by state-of-the-art simulation-to-real domain adaptation methods, which access the target domain data at training time.

37 citations


Book ChapterDOI
17 Oct 2019
TL;DR: It is shown how the coordination of reactors can safely and transparently exploit parallelism, both in shared-memory and distributed systems.
Abstract: This paper describes a component-based concurrent model of computation for reactive systems. The components in this model, featuring ports and hierarchy, are called reactors. The model leverages a semantic notion of time, an event scheduler, and a synchronous-reactive style of communication to achieve determinism. Reactors enable a programming model that ensures determinism, unless explicitly abandoned by the programmer. We show how the coordination of reactors can safely and transparently exploit parallelism, both in shared-memory and distributed systems.

36 citations
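The deterministic coordination described above rests on ordering events by a semantic (logical) time tag. A minimal sketch of such a scheduler, assuming a simple (tag, sequence-number) ordering rather than the reactor model's actual time semantics, might look like:

```python
import heapq

class Scheduler:
    """Minimal sketch of a deterministic event scheduler with a logical
    notion of time (an illustration, not the reactor model itself)."""
    def __init__(self):
        self._queue = []
        self._seq = 0   # tie-breaker makes same-tag ordering deterministic
        self.now = 0

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        log = []
        while self._queue:
            tag, _, action = heapq.heappop(self._queue)
            self.now = tag            # advance logical time to the event's tag
            log.append((tag, action()))
        return log

sched = Scheduler()
sched.schedule(2, lambda: "b")
sched.schedule(1, lambda: "a")
trace = sched.run()
```

Because events are processed strictly in (tag, sequence) order, the trace is independent of the order in which events happen to be scheduled, which is the kind of determinism the reactor model guarantees by construction.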


Proceedings ArticleDOI
26 Jul 2019
TL;DR: It is shown that the encoder-decoder model is able to identify the injected anomalies in a modern AM process in an unsupervised fashion, and also gives hints about the temperature non-uniformity of the testbed during manufacturing, which was not known prior to the experiment.
Abstract: We present a novel unsupervised deep learning approach that utilizes an encoder-decoder architecture for detecting anomalies in sequential sensor data collected during industrial manufacturing. Our approach is designed to not only detect whether there exists an anomaly at a given time step, but also to predict what will happen next in the (sequential) process. We demonstrate our approach on a dataset collected from a real-world Additive Manufacturing (AM) testbed. The dataset contains infrared (IR) images collected under both normal conditions and synthetic anomalies. We show that our encoder-decoder model is able to identify the injected anomalies in a modern AM process in an unsupervised fashion. In addition, our approach also gives hints about the temperature non-uniformity of the testbed during manufacturing, which was not known prior to the experiment.

30 citations
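The detection logic combines prediction and anomaly flagging. As a toy illustration of the thresholding step only, assume a naive last-value predictor standing in for the encoder-decoder network: a threshold is calibrated from errors seen under normal conditions, and test-time errors above it are flagged.

```python
import statistics

def prediction_errors(series):
    """Absolute one-step-ahead errors of a naive last-value predictor,
    standing in for the paper's encoder-decoder reconstruction errors."""
    return [abs(b - a) for a, b in zip(series, series[1:])]

def fit_threshold(normal_series, n_sigma=3.0):
    """Calibrate an anomaly threshold from errors observed under normal conditions."""
    errs = prediction_errors(normal_series)
    return statistics.mean(errs) + n_sigma * statistics.pstdev(errs)

def detect_anomalies(series, threshold):
    """Indices into the series where the prediction error exceeds the threshold."""
    return [i + 1 for i, e in enumerate(prediction_errors(series)) if e > threshold]

normal = [20.0, 20.5, 20.2, 20.4, 20.1, 20.3]
threshold = fit_threshold(normal)
monitored = [20.0, 20.3, 20.2, 25.0, 20.1]  # injected spike at index 3
```

Note that both the jump up to the spike and the drop back down produce large prediction errors, so a single injected anomaly can flag two adjacent time steps.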


Posted Content
TL;DR: A unifying formalization of the adversarial input generation process from a formal methods perspective is proposed and a definition of robustness is provided that is general enough to capture different formulations.
Abstract: Deep neural networks have been shown to lack robustness to small input perturbations. The process of generating the perturbations that expose the lack of robustness of neural networks is known as adversarial input generation. This process depends on the goals and capabilities of the adversary. In this paper, we propose a unifying formalization of the adversarial input generation process from a formal methods perspective. We provide a definition of robustness that is general enough to capture different formulations. The expressiveness of our formalization is shown by modeling and comparing a variety of adversarial attack techniques.

26 citations
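A common special case of the robustness notion being generalized is pointwise (local) robustness under a norm-bounded adversary; the following formulation is standard in the literature and is shown here only as an illustration, not as the paper's exact definition:

```latex
% Local robustness of a classifier f at input x, under an l_p-bounded adversary:
\forall x' \in \mathbb{R}^n:\; \|x' - x\|_p \le \varepsilon \;\Rightarrow\; f(x') = f(x)
% An adversarial input generator seeks a witness to its negation:
\exists x':\; \|x' - x\|_p \le \varepsilon \;\wedge\; f(x') \ne f(x)
```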


Proceedings ArticleDOI
01 Jun 2019
TL;DR: This paper proposes using Monte Carlo dropout (MC-dropout) to enhance the supervised learning pipeline, so that the resulting neural network is able to detect and diagnose unseen incipient fault examples.
Abstract: Early detection of incipient faults is of vital importance to reducing maintenance costs, saving energy, and enhancing occupant comfort in buildings. Popular supervised learning models such as deep neural networks are considered promising due to their ability to directly learn from labeled fault data; however, it is known that the performance of supervised learning approaches highly relies on the availability and quality of labeled training data. In Fault Detection and Diagnosis (FDD) applications, the lack of labeled incipient fault data has posed a major challenge to applying these supervised learning techniques to commercial buildings. To overcome this challenge, this paper proposes using Monte Carlo dropout (MC-dropout) to enhance the supervised learning pipeline, so that the resulting neural network is able to detect and diagnose unseen incipient fault examples. We also examine the proposed MC-dropout method on the RP-1043 dataset to demonstrate its effectiveness in indicating the most likely incipient fault types.

22 citations
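MC-dropout keeps dropout active at prediction time and averages over many stochastic forward passes; the spread across passes provides an uncertainty signal that can flag unseen fault types. The toy linear softmax classifier below is an illustrative stand-in for the paper's deep network.

```python
import math
import random

def predict_stochastic(x, weights, rng, p_drop=0.5):
    """One stochastic forward pass of a toy linear softmax classifier with
    input dropout kept active at prediction time (the core of MC-dropout)."""
    mask = [1.0 if rng.random() >= p_drop else 0.0 for _ in x]
    scale = 1.0 / (1.0 - p_drop)  # inverted-dropout rescaling
    logits = [scale * sum(w * m * xi for w, m, xi in zip(row, mask, x))
              for row in weights]
    top = max(logits)
    exps = [math.exp(l - top) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def mc_dropout_predict(x, weights, n_samples=200, seed=0):
    """Average class probabilities over many stochastic passes; report the
    predictive entropy of the mean as an uncertainty estimate."""
    rng = random.Random(seed)
    mean = [0.0] * len(weights)
    for _ in range(n_samples):
        for i, p in enumerate(predict_stochastic(x, weights, rng)):
            mean[i] += p / n_samples
    entropy = -sum(p * math.log(p) for p in mean if p > 0)
    return mean, entropy
```

High predictive entropy on an input suggests the model is uncertain, which is the signal used to point at the most likely incipient fault types rather than committing to a single confident label.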


Proceedings ArticleDOI
18 Feb 2019
TL;DR: The results on the C-MAPSS dataset demonstrate that OC-SVM can achieve satisfactory accuracy in detecting change points in time series with fewer training data, compared to state-of-the-art deep learning approaches.
Abstract: Identifying the change point of a system’s health status is important. Indeed, a change point usually signifies an incipient fault under development. The One-Class Support Vector Machine (OC-SVM) is a popular machine learning model for anomaly detection that could be used for identifying change points; however, it is sometimes difficult to obtain a good OC-SVM model that can be used on sensor measurement time series to identify the change points in system health status. In this paper, we propose a novel approach for calibrating OC-SVM models. Our approach uses a heuristic search method to find a good set of input data and hyperparameters that yield a well-performing model. Our results on the C-MAPSS dataset demonstrate that OC-SVM can achieve satisfactory accuracy in detecting change points in time series with fewer training data, compared to state-of-the-art deep learning approaches. In our case study, the OC-SVM calibrated by the proposed approach is shown to be especially useful in scenarios with a limited amount of training data.

17 citations
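The calibration idea, searching over input-data and hyperparameter choices and keeping the best-scoring model, can be sketched without an actual OC-SVM. Below, a trivial z-score detector stands in for the OC-SVM, and random search stands in for the paper's heuristic search; `window` and `n_sigma` are hypothetical hyperparameters, not ones from the paper.

```python
import random
import statistics

def change_point(series, window, n_sigma):
    """Stand-in one-class detector: fit mean/stdev on the first `window`
    (healthy) samples and flag the first sample whose z-score exceeds n_sigma."""
    mu = statistics.mean(series[:window])
    sd = statistics.pstdev(series[:window]) or 1e-9
    for i in range(window, len(series)):
        if abs(series[i] - mu) / sd > n_sigma:
            return i
    return None

def calibrate(series, true_cp, rng, n_trials=200):
    """Heuristic (random) search over hyperparameters, scored by how close
    the detected change point lands to a known one in calibration data."""
    best, best_err = None, float("inf")
    for _ in range(n_trials):
        window = rng.randint(5, min(30, true_cp))
        n_sigma = rng.uniform(1.0, 6.0)
        cp = change_point(series, window, n_sigma)
        err = abs(cp - true_cp) if cp is not None else float("inf")
        if err < best_err:
            best, best_err = (window, n_sigma), err
    return best, best_err
```

The same search loop applies unchanged if `change_point` is replaced by training and scoring an actual OC-SVM on the chosen slice of input data.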


Journal ArticleDOI
TL;DR: In this paper, an assume-guarantee contract framework for cyber-physical system design under probabilistic requirements is presented, where a stochastic linear system and a set of requirements captured by bounded Stochastic Signal Temporal Logic contracts are considered.
Abstract: We present an assume-guarantee contract framework for cyber-physical system design under probabilistic requirements. Given a stochastic linear system and a set of requirements captured by bounded Stochastic Signal Temporal Logic (StSTL) contracts, we propose algorithms to check contract compatibility, consistency, and refinement, and generate a sequence of control inputs that satisfies a contract. We leverage encodings of the verification and control synthesis tasks into mixed integer optimization problems, and conservative approximations of probabilistic constraints that produce sound and tractable problem formulations. We illustrate the effectiveness of our approach on three case studies, including the design of controllers for aircraft power distribution networks.

17 citations
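For intuition, an individual linear chance constraint under Gaussian uncertainty admits a well-known deterministic reformulation; this is a standard result shown for illustration, not necessarily the encoding used in the paper, which targets bounded StSTL contracts via mixed integer optimization:

```latex
% Exact for w ~ N(\mu, \Sigma); \Phi^{-1} is the standard normal quantile.
\Pr\left[a^{\top} w \le b\right] \ge 1 - \epsilon
\iff
a^{\top}\mu + \Phi^{-1}(1-\epsilon)\,\sqrt{a^{\top}\Sigma\, a} \le b
\qquad (\epsilon \le 1/2).
% Distribution-free, conservative variants replace \Phi^{-1}(1-\epsilon)
% with \sqrt{(1-\epsilon)/\epsilon} via Cantelli's inequality.
```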


Posted Content
TL;DR: In this paper, a heuristic search method was proposed to find a good set of input data and hyperparameters that yield a well-performing model for detecting change points in time series with fewer training data.
Abstract: It is important to identify the change point of a system's health status, which usually signifies an incipient fault under development. The One-Class Support Vector Machine (OC-SVM) is a popular machine learning model for anomaly detection and hence could be used for identifying change points; however, it is sometimes difficult to obtain a good OC-SVM model that can be used on sensor measurement time series to identify the change points in system health status. In this paper, we propose a novel approach for calibrating OC-SVM models. The approach uses a heuristic search method to find a good set of input data and hyperparameters that yield a well-performing model. Our results on the C-MAPSS dataset demonstrate that OC-SVM can also achieve satisfactory accuracy in detecting change points in time series with fewer training data, compared to state-of-the-art deep learning approaches. In our case study, the OC-SVM calibrated by the proposed approach is shown to be especially useful in scenarios with a limited amount of training data.

Journal ArticleDOI
TL;DR: This paper casts contract models into a coherent formalism used to derive general results about the properties of their operators, in the context of reuse and distributed development.
Abstract: Contract models have been proposed to promote and facilitate reuse and distributed development. In this paper, we cast contract models into a coherent formalism used to derive general results about the properties of their operators. We study several extensions of the basic model, including the distinction between weak and strong assumptions and maximality of the specification. We then analyze the disjunction and conjunction operators, and show how they can be broken up into a sequence of simpler operations. This leads to the definition of a new contract viewpoint merging operator, which better captures the design intent in contrast to the more traditional conjunction. The adjoint operation, which we call separation, can be used to re-partition the specification into different viewpoints. We show the symmetries of these operations with respect to composition and quotient.

Proceedings ArticleDOI
16 Apr 2019
TL;DR: It is shown that modifying a reach-avoid specification with SPEC allows us to synthesize a safe controller for a larger set of environments compared to SSM, and a probabilistic method is proposed to compute SPEC for a general class of systems.
Abstract: We consider the problem of extracting safe environments and controllers for reach-avoid objectives for systems with known state and control spaces, but unknown dynamics. In a given environment, a common approach is to synthesize a controller from an abstraction or a model of the system (potentially learned from data). However, in many situations, the relationship between the dynamics of the model and the actual system is not known; and hence it is difficult to provide safety guarantees for the system. In such cases, the Standard Simulation Metric (SSM), defined as the worst-case norm distance between the model and the system output trajectories, can be used to modify a reach-avoid specification for the system into a more stringent specification for the abstraction. Nevertheless, the obtained distance, and hence the modified specification, can be quite conservative. This limits the set of environments for which a safe controller can be obtained. We propose SPEC, a specification-centric simulation metric, which overcomes these limitations by computing the distance using only the trajectories that violate the specification for the system. We show that modifying a reach-avoid specification with SPEC allows us to synthesize a safe controller for a larger set of environments compared to SSM. We also propose a probabilistic method to compute SPEC for a general class of systems. Case studies using simulators for quadrotors and autonomous cars illustrate the advantages of the proposed metric for determining safe environment sets and controllers.

Proceedings ArticleDOI
01 Dec 2019
TL;DR: In this paper, the authors explore the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or cloud computing platforms using off-the-shelf technologies, and demonstrate that executing time-critical applications on cloud platforms is viable based on a series of dedicated latency tests targeting relevant real-time configurations.
Abstract: We explore the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or cloud computing platforms using off-the-shelf technologies. In particular, we demonstrate that executing time-critical applications on cloud platforms is viable based on a series of dedicated latency tests targeting relevant real-time configurations.


Posted Content
TL;DR: The proposed novel approach of augmenting the classification model with an additional unsupervised learning task leads to improved fault detection and diagnosis performance, especially on out-of-distribution examples including both incipient and unknown faults.
Abstract: The Monte Carlo dropout method has proved to be a scalable and easy-to-use approach for estimating the uncertainty of deep neural network predictions. This approach was recently applied to Fault Detection and Diagnosis (FDD) applications to improve the classification performance on incipient faults. In this paper, we propose a novel approach of augmenting the classification model with an additional unsupervised learning task. We justify our choice of algorithm design via an information-theoretical analysis. Our experimental results on three datasets from diverse application domains show that the proposed method leads to improved fault detection and diagnosis performance, especially on out-of-distribution examples including both incipient and unknown faults.

Proceedings ArticleDOI
10 Jul 2019
TL;DR: A machine learning approach to the solution of chance constrained optimizations in the context of voltage regulation problems in power system operation using a learning model similar to Support Vector Machines and proposing a sampling algorithm that efficiently trains the model.
Abstract: We present a machine learning approach to the solution of chance constrained optimizations in the context of voltage regulation problems in power system operation. The novelty of our approach resides in approximating the feasible region of uncertainty with an ellipsoid. We formulate this problem using a learning model similar to Support Vector Machines (SVM) and propose a sampling algorithm that efficiently trains the model. We demonstrate our approach on a voltage regulation problem using standard IEEE distribution test feeders.
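The ellipsoidal approximation can be illustrated crudely: given samples of the uncertainty, fit an axis-aligned ellipsoid (center plus semi-axes) scaled so that a target fraction of samples lies inside. The paper instead trains an SVM-like model with a dedicated sampling algorithm; everything below is an illustrative simplification.

```python
import random

def fit_axis_aligned_ellipsoid(samples, coverage=0.95):
    """Fit an axis-aligned ellipsoid around uncertainty samples so that at
    least a `coverage` fraction lies inside (a crude sketch of approximating
    the feasible region of uncertainty with an ellipsoid)."""
    dim = len(samples[0])
    center = [sum(s[i] for s in samples) / len(samples) for i in range(dim)]
    # Per-coordinate extents give the ellipsoid's shape ...
    axes = [max(abs(s[i] - center[i]) for s in samples) or 1e-12 for i in range(dim)]
    # ... and a common radius is scaled to hit the target coverage.
    dists = sorted(sum(((s[i] - center[i]) / axes[i]) ** 2 for i in range(dim)) ** 0.5
                   for s in samples)
    r = dists[min(len(dists) - 1, int(coverage * len(dists)))]
    return center, [a * r for a in axes]

def inside(point, center, axes):
    return sum(((p - c) / a) ** 2 for p, c, a in zip(point, center, axes)) <= 1.0
```

Membership in the fitted ellipsoid is a single quadratic test, which is what makes this kind of approximation attractive inside an optimization loop.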

Proceedings ArticleDOI
01 Oct 2019
TL;DR: This paper presents a two-stage computationally feasible multi-objective calibration process that achieves a high degree of realism for both traffic flow and average risk.
Abstract: Microscopic traffic simulation has become an important tool to investigate traffic efficiency and road safety. In order to produce meaningful results, driver behaviour models need to be carefully calibrated to represent real-world conditions. If this type of simulation is to be used to evaluate safety features of traffic, then on top of macroscopic relationships such as the speed-density diagram, it should also adequately represent the average risk of accidents occurring on the road. In this paper, we present a two-stage, computationally feasible, multi-objective calibration process. The first stage performs a parameter sensitivity analysis to select only parameters with a considerable effect on the respective objective functions. The second stage employs a multi-objective genetic algorithm utilizing only a few influential parameters, which produces a front of Pareto-optimal solutions with respect to the conflicting objective functions. Compared to traditional methods, which focus on only one objective while sacrificing the accuracy of the other, our method achieves a high degree of realism for both traffic flow and average risk.
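The genetic algorithm in the second stage maintains a set of non-dominated solutions over the conflicting objectives. The non-domination test itself is simple; a sketch for a two-objective minimization (e.g. hypothetical flow-error and risk-error scores) is:

```python
def pareto_front(solutions):
    """Return the non-dominated set for a minimization problem: the Pareto
    front over conflicting objectives that a multi-objective GA maintains."""
    def dominates(a, b):
        # a dominates b if it is no worse in every objective and better in one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

candidates = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0)]
front = pareto_front(candidates)
```

Only (3.0, 3.0) is dominated here (by (2.0, 2.0)); the remaining points are incomparable trade-offs, which is exactly the "front" the calibration process reports instead of a single compromise solution.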

Journal ArticleDOI
TL;DR: This paper proposes a scalable method for solving the problem of bounded synthesis from libraries, based on the CounterExample-Guided Inductive Synthesis paradigm, and presents a parallel implementation which instantiates components defined as Linear Temporal Logic-based Assume/Guarantee Contracts.

Posted Content
TL;DR: This paper presents a purpose-built orchestration tool that addresses the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or (edge) cloud computing platforms, and can manage the execution of containerized applications on shared resources without compromising hard real-time execution determinism.
Abstract: Industry 4.0 is fundamentally changing the way data is collected, stored, and analyzed in industrial processes. While this change enables novel applications such as the flexible manufacturing of highly customized products, the real-time control of these processes has not yet realized its full potential. We believe that modern virtualization techniques, specifically application containers, present a unique opportunity to decouple control functionality from associated hardware. Through them, we can fully realize the potential for highly distributed and transferable industrial processes, even with real-time constraints arising from time-critical sub-processes. In this paper, we present a purpose-built orchestration tool to manage the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or (edge) cloud computing platforms. Using off-the-shelf technology, the proposed tool can manage the execution of containerized applications on shared resources without compromising hard real-time execution determinism. Through first experimental results, we confirm the viability of the approach and analyze the behavior of resource-shared systems with strict real-time requirements. We then describe experiments designed to assess the performance, application scope, and limits of the presented approach.

Proceedings ArticleDOI
TL;DR: In this paper, the authors propose a specification-centric simulation metric, SPEC, which overcomes these limitations by computing the distance using only the trajectories that violate the specification for the system, and show that modifying a reach-avoid specification with SPEC allows to synthesize a safe controller for a larger set of environments compared to SSM.
Abstract: We consider the problem of extracting safe environments and controllers for reach-avoid objectives for systems with known state and control spaces, but unknown dynamics. In a given environment, a common approach is to synthesize a controller from an abstraction or a model of the system (potentially learned from data). However, in many situations, the relationship between the dynamics of the model and the actual system is not known; and hence it is difficult to provide safety guarantees for the system. In such cases, the Standard Simulation Metric (SSM), defined as the worst-case norm distance between the model and the system output trajectories, can be used to modify a reach-avoid specification for the system into a more stringent specification for the abstraction. Nevertheless, the obtained distance, and hence the modified specification, can be quite conservative. This limits the set of environments for which a safe controller can be obtained. We propose SPEC, a specification-centric simulation metric, which overcomes these limitations by computing the distance using only the trajectories that violate the specification for the system. We show that modifying a reach-avoid specification with SPEC allows us to synthesize a safe controller for a larger set of environments compared to SSM. We also propose a probabilistic method to compute SPEC for a general class of systems. Case studies using simulators for quadrotors and autonomous cars illustrate the advantages of the proposed metric for determining safe environment sets and controllers.

Posted Content
TL;DR: In this article, a machine learning approach to the solution of chance constrained optimizations in the context of voltage regulation problems in power system operation is presented, where the novelty of the approach resides in approximating the feasible region of uncertainty with an ellipsoid.
Abstract: We present a machine learning approach to the solution of chance constrained optimizations in the context of voltage regulation problems in power system operation. The novelty of our approach resides in approximating the feasible region of uncertainty with an ellipsoid. We formulate this problem using a learning model similar to Support Vector Machines (SVM) and propose a sampling algorithm that efficiently trains the model. We demonstrate our approach on a voltage regulation problem using standard IEEE distribution test feeders.