Showing papers by "Alberto Sangiovanni-Vincentelli published in 2018"


Book
28 Mar 2018
TL;DR: This monograph provides a treatment in which contracts are precisely defined and characterized so that they can be used in system design methodologies without ambiguity, and establishes a link between interfaces and contracts that shows their similarities and correspondences.
Abstract: Recently, contract-based design has been proposed as an “orthogonal” approach that complements system design methodologies proposed so far to cope with the complexity of system design. Contract-based design provides a rigorous scaffolding for verification, analysis, abstraction/refinement, and even synthesis. A number of results have been obtained in this domain but a unified treatment of the topic that can help put contract-based design in perspective was missing. This monograph intends to provide such a treatment where contracts are precisely defined and characterized so that they can be used in design methodologies with no ambiguity. In particular, this monograph identifies the essence of complex system design using contracts through a mathematical “meta-theory”, where all the properties of the methodology are derived from a very abstract and generic notion of contract. We show that the meta-theory provides deep and illuminating links with existing contract and interface theories, as well as guidelines for designing new theories. Our study encompasses contracts for both software and systems, with emphasis on the latter. We illustrate the use of contracts with two examples: requirement engineering for a parking garage management, and the development of contracts for timing and scheduling in the context of the AUTOSAR methodology in use in the automotive sector.
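
To make the abstract notions concrete, here is a minimal Python sketch, not taken from the monograph, of saturation, refinement, and composition for assume-guarantee contracts over a small finite universe of behaviors; the universe, the contract values, and all names are illustrative assumptions.

    from dataclasses import dataclass

    UNIVERSE = frozenset(range(8))     # hypothetical finite set of behaviors

    @dataclass(frozen=True)
    class Contract:
        assumptions: frozenset
        guarantees: frozenset

        def saturate(self):
            # a saturated contract also "guarantees" everything it does not assume
            return Contract(self.assumptions,
                            self.guarantees | (UNIVERSE - self.assumptions))

    def refines(c1, c2):
        # C1 refines C2: C1 assumes less and (once saturated) guarantees more
        c1, c2 = c1.saturate(), c2.saturate()
        return c2.assumptions <= c1.assumptions and c1.guarantees <= c2.guarantees

    def compose(c1, c2):
        c1, c2 = c1.saturate(), c2.saturate()
        g = c1.guarantees & c2.guarantees
        return Contract((c1.assumptions & c2.assumptions) | (UNIVERSE - g), g)

    spec = Contract(frozenset({0, 1, 2, 3}), frozenset({0, 1}))
    impl = Contract(frozenset({0, 1, 2, 3, 4}), frozenset({0, 1}))
    print(refines(impl, spec))            # True: impl assumes less, guarantees no less
    print(compose(impl, spec).guarantees) # guarantees of the composite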

238 citations


Proceedings ArticleDOI
05 Jun 2018
TL;DR: A framework is presented to rapidly create point clouds with accurate point-level labels from a computer game; point clouds from auto-driving scenes can serve as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network, with the falsifying examples used to make the network more robust through retraining.
Abstract: 3D LiDAR scanners are playing an increasingly important role in autonomous driving as they can generate depth information of the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation. This jeopardizes the efficient development of supervised deep learning algorithms which are often data-hungry. We present a framework to rapidly create point clouds with accurate point-level labels from a computer game. To the best of our knowledge, this is the first publication on a LiDAR point cloud simulation framework for autonomous driving. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network, and the falsifying examples can be used to make the neural network more robust through retraining. In addition, the scene images can be captured simultaneously to support sensor fusion tasks, with a method proposed to perform automatic registration between the point clouds and captured scene images. We show a significant improvement in accuracy (+9%) in point cloud segmentation by augmenting the training dataset with the generated synthesized data. Our experiments also show that, by testing and retraining the network using point clouds from user-configured scenes, the weaknesses/blind spots of the neural network can be fixed.

140 citations


Journal ArticleDOI
TL;DR: A novel multi-modal Luenberger (MML) observer based on efficient Satisfiability Modulo Theory (SMT) solving is proposed, together with an efficient SMT-based decision procedure that reasons about the estimates of the MML observer to detect at runtime which sets of sensors are attack-free and uses them to obtain a correct state estimate.
Abstract: We introduce a scalable observer architecture, which can efficiently estimate the states of a discrete-time linear-time-invariant system whose sensors are manipulated by an attacker, and is robust to measurement noise. Given an upper bound on the number of attacked sensors, we build on previous results on necessary and sufficient conditions for state estimation, and propose a novel Multi-Modal Luenberger (MML) observer based on efficient Satisfiability Modulo Theory (SMT) solving. We present two techniques to reduce the complexity of the estimation problem. As a first strategy, instead of a bank of distinct observers, we use a family of filters sharing a single dynamical equation for the states, but different output equations, to generate estimates corresponding to different subsets of sensors. Such an architecture can reduce the memory usage of the observer from an exponential to a linear function of the number of sensors. We then develop an efficient SMT-based decision procedure that is able to reason about the estimates of the MML observer to detect at runtime which sets of sensors are attack-free, and use them to obtain a correct state estimate. Finally, we discuss two optimization-based algorithms that can efficiently select the observer parameters with the goal of minimizing the sensitivity of the estimates with respect to sensor noise. We provide proofs of convergence for our estimation algorithm and report simulation results to compare its runtime performance with alternative techniques. We show that our algorithm scales well for large systems (including up to 5,000 sensors) for which many previously proposed algorithms are not implementable due to excessive memory and time requirements. Finally, we illustrate the effectiveness of our approach, both in terms of resiliency to attacks and robustness to noise, on the design of large-scale power distribution networks.
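
As a rough illustration of the underlying estimation problem, the sketch below, which is not the paper's algorithm, brute-forces the search over candidate attack-free sensor subsets that the paper delegates to an SMT solver; the system matrices, noise bound, and injected attack are all invented.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    C = rng.normal(size=(5, 2))                  # one output row per sensor
    x = np.array([1.0, -0.5])                    # true state
    y = C @ x + 0.01 * rng.normal(size=5)        # noisy measurements
    y[3] += 5.0                                  # sensor 3 is under attack
    q, noise_bound = 1, 0.1                      # at most q attacked sensors

    def consistent(sensors):
        # Least-squares estimate from this sensor subset; accept if residuals
        # stay within the noise bound (a stand-in for the SMT consistency check).
        Cs, ys = C[list(sensors)], y[list(sensors)]
        xh, *_ = np.linalg.lstsq(Cs, ys, rcond=None)
        return np.max(np.abs(Cs @ xh - ys)) <= noise_bound, xh

    for sensors in itertools.combinations(range(5), 5 - q):
        ok, xh = consistent(sensors)
        if ok:
            print("attack-free sensors:", sensors, "estimate:", xh)
            break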

91 citations


Posted Content
TL;DR: In this paper, a simulation framework is presented in which point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network, with the falsifying examples used to make the neural network more robust through retraining.
Abstract: 3D LiDAR scanners are playing an increasingly important role in autonomous driving as they can generate depth information of the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation. This jeopardizes the efficient development of supervised deep learning algorithms which are often data-hungry. We present a framework to rapidly create point clouds with accurate point-level labels from a computer game. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network, and the falsifying examples can be used to make the neural network more robust through retraining. In addition, the scene images can be captured simultaneously to support sensor fusion tasks, with a method proposed to perform automatic calibration between the point clouds and captured scene images. We show a significant improvement in accuracy (+9%) in point cloud segmentation by augmenting the training dataset with the generated synthesized data. Our experiments also show that, by testing and retraining the network using point clouds from user-configured scenes, the weaknesses/blind spots of the neural network can be fixed.

68 citations


Journal ArticleDOI
17 Sep 2018
TL;DR: This paper identifies, abstracts, and formalizes components of smart buildings, and presents a design flow that maps high-level specifications of desired building applications to their physical implementations under the PBD framework.
Abstract: Smart buildings today are aimed at providing safe, healthy, comfortable, affordable, and beautiful spaces in a carbon and energy-efficient way. They are emerging as complex cyber–physical systems with humans in the loop. Cost, the need to cope with increasing functional complexity, flexibility, fragmentation of the supply chain, and time-to-market pressure are rendering the traditional heuristic and ad hoc design paradigms inefficient and insufficient for the future. In this paper, we present a platform-based methodology for smart building design. Platform-based design (PBD) promotes the reuse of hardware and software on shared infrastructures, enables rapid prototyping of applications, and involves extensive exploration of the design space to optimize design performance. In this paper, we identify, abstract, and formalize components of smart buildings, and present a design flow that maps high-level specifications of desired building applications to their physical implementations under the PBD framework. A case study on the design of on-demand heating, ventilation, and air conditioning (HVAC) systems is presented to demonstrate the use of PBD.

63 citations


Journal ArticleDOI
07 Aug 2018
TL;DR: A new satisfiability modulo convex programming (SMC) framework is presented that integrates SAT solving and convex optimization to efficiently reason about Boolean and convex constraints at the same time; it can handle more complex problem instances than state-of-the-art alternative techniques based on SMT solving and mixed integer convex programming.
Abstract: The design of cyber–physical systems (CPSs) requires methods and tools that can efficiently reason about the interaction between discrete models, e.g., representing the behaviors of “cyber” components, and continuous models of physical processes. Boolean methods such as satisfiability (SAT) solving are successful in tackling large combinatorial search problems for the design and verification of hardware and software components. On the other hand, problems in control, communications, signal processing, and machine learning often rely on convex programming as a powerful solution engine. However, despite their strengths, neither approach would work in isolation for CPSs. In this paper, we present a new satisfiability modulo convex programming (SMC) framework that integrates SAT solving and convex optimization to efficiently reason about Boolean and convex constraints at the same time. We exploit the properties of a class of logic formulas over Boolean and nonlinear real predicates, termed monotone satisfiability modulo convex formulas, whose satisfiability can be checked via a finite number of convex programs. Following the lazy satisfiability modulo theory (SMT) paradigm, we develop a new decision procedure for monotone SMC formulas, which coordinates SAT solving and convex programming to provide a satisfying assignment or determine that the formula is unsatisfiable. A key step in our coordination scheme is the efficient generation of succinct infeasibility proofs for inconsistent constraints that can support conflict-driven learning and accelerate the search. We demonstrate our approach on different CPS design problems, including spacecraft docking mission control, robotic motion planning, and secure state estimation. We show that SMC can handle more complex problem instances than state-of-the-art alternative techniques based on SMT solving and mixed integer convex programming.
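
A toy rendition of the lazy SMC loop is sketched below, assuming cvxpy is available; Boolean assignments are enumerated exhaustively, whereas the paper coordinates a CDCL SAT solver with succinct infeasibility certificates. The skeleton formula and convex constraints are made up.

    import itertools
    import cvxpy as cp

    x = cp.Variable(2)
    # Each Boolean selects a convex constraint (monotone: truth only adds constraints).
    convex_of = {
        "b0": [x[0] + x[1] >= 4],
        "b1": [x[0] <= 1],
        "b2": [x[1] <= 1],
    }
    clauses = [("b0",), ("b1", "b2")]  # Boolean skeleton: b0 AND (b1 OR b2)

    for bits in itertools.product([False, True], repeat=3):
        assign = dict(zip(("b0", "b1", "b2"), bits))
        if not all(any(assign[v] for v in cl) for cl in clauses):
            continue  # skeleton unsatisfied; a SAT solver would prune these for us
        cons = [c for v, cs in convex_of.items() if assign[v] for c in cs]
        prob = cp.Problem(cp.Minimize(0), cons)
        prob.solve()
        if prob.status == "optimal":
            print("satisfying assignment:", assign, "witness x =", x.value)
            break
        # infeasible: a real SMC solver would extract a small infeasibility
        # certificate here and feed it back as a conflict clause to the SAT solver
    else:
        print("UNSAT")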

53 citations


Posted Content
25 Sep 2018
TL;DR: This paper designs Scenic, a domain-specific probabilistic programming language for describing "scenarios" that are distributions over scenes; Scenic allows assigning distributions to features of the scene, as well as declaratively imposing hard and soft constraints over the scene.
Abstract: Synthetic data has proved increasingly useful in both training and testing machine learning models such as neural networks. The major problem in synthetic data generation is producing meaningful data that is not simply random but reflects properties of real-world data or covers particular cases of interest. In this paper, we show how a probabilistic programming language can be used to guide data synthesis by encoding domain knowledge about what data is useful. Specifically, we focus on data sets arising from "scenes", configurations of physical objects; for example, images of cars on a road. We design a domain-specific language, Scenic, for describing "scenarios" that are distributions over scenes. The syntax of Scenic makes it easy to specify complex relationships between the positions and orientations of objects. As a probabilistic programming language, Scenic allows assigning distributions to features of the scene, as well as declaratively imposing hard and soft constraints over the scene. A Scenic scenario thereby implicitly defines a distribution over scenes, and we formulate the problem of sampling from this distribution as "scene improvisation". We implement an improviser for Scenic scenarios and apply it in a case study generating synthetic data sets for a convolutional neural network designed to detect cars in road images. Our experiments demonstrate the usefulness of our approach by using Scenic to analyze and improve the performance of the network in various scenarios.
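
The following Python mock-up, which does not use Scenic's actual syntax, illustrates the idea of scene improvisation as rejection sampling: distributions over object features plus hard constraints implicitly define a distribution over scenes. All quantities are invented.

    import random

    def sample_scene():
        ego = {"x": random.uniform(0, 50), "heading": random.gauss(0, 5)}
        other = {"x": ego["x"] + random.uniform(5, 30),   # roughly ahead of ego
                 "heading": random.gauss(0, 15)}
        return ego, other

    def hard_constraints(ego, other):
        # e.g., the other car must stay on the road and keep a minimum gap
        return 0 <= other["x"] <= 60 and other["x"] - ego["x"] >= 5

    def improvise(max_tries=10_000):
        for _ in range(max_tries):
            ego, other = sample_scene()
            if hard_constraints(ego, other):
                return ego, other
        raise RuntimeError("constraints too tight for rejection sampling")

    random.seed(1)
    print(improvise())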

45 citations


Proceedings ArticleDOI
01 Jul 2018
Author(s): Dreossi, Tommaso; Ghosh, Shromona; Yue, Xiangyu; Keutzer, Kurt; Sangiovanni-Vincentelli, Alberto L; Seshia, Sanjit A | Editor(s): Lang, Jerome

45 citations


Proceedings ArticleDOI
19 Mar 2018
TL;DR: CHASE, a framework for requirement capture, formalization, and validation for cyber-physical systems, is presented; it combines a practical front-end formal specification language based on patterns with a rigorous verification back-end based on assume-guarantee contracts.
Abstract: This paper presents CHASE, a framework for requirement capture, formalization, and validation for cyber-physical systems. CHASE combines a practical front-end formal specification language based on patterns with a rigorous verification back-end based on assume-guarantee contracts. The front-end language can express temporal properties of networks using a declarative style, and supports automatic translation from natural-language constructs to low-level mathematical languages. The verification back-end leverages the mathematical formalism of contracts to reason about system requirements and determine inconsistencies and dependencies between them. CHASE features a modular and extensible software infrastructure that can support different domain-specific languages, modeling formalisms, and analysis tools. We illustrate its effectiveness on industrial design examples, including control of aircraft power distribution networks and arbitration of a mixed-criticality automotive bus.
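
To illustrate the flavor of a pattern-based front end, here is a hypothetical mini-translator from natural-language constructs to LTL strings; CHASE's actual pattern grammar and back-end interface differ, and the patterns and propositions below are invented.

    import re

    PATTERNS = [
        (r"it is always the case that (.+)", "G({})"),
        (r"(.+) shall eventually hold",      "F({})"),
        (r"whenever (.+) then (.+)",         "G(({}) -> ({}))"),
    ]

    def to_ltl(req: str) -> str:
        req = req.strip().lower().rstrip(".")
        for pat, tmpl in PATTERNS:
            m = re.fullmatch(pat, req)
            if m:
                return tmpl.format(*m.groups())
        raise ValueError(f"no pattern matches: {req!r}")

    print(to_ltl("It is always the case that bus_power"))
    print(to_ltl("Whenever gen_fault then backup_on"))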

31 citations


Journal ArticleDOI
17 Sep 2018
TL;DR: It is argued that the codesign of the cyber and physical components would expose solutions that are better under all aspects, such as safety, efficiency, security, performance, reliability, fault tolerance, and extensibility.
Abstract: Cyber–physical system (CPS) analysis and design are challenging due to the intrinsic heterogeneity of those systems. Today, CPSs are often designed by leveraging existing solutions and by adding cyber components to an existing physical system, thus decomposing the design into two separate phases. In this paper, we argue that the codesign of the cyber and physical components would expose solutions that are better under all aspects, such as safety, efficiency, security, performance, reliability, fault tolerance, and extensibility. To do so, automated codesign tools are a necessity due to the complexity of the problems at hand. In the paper, we will discuss the key needs and challenges in developing modeling, simulation, synthesis, validation, and verification tools for CPS codesign, present promising codesign approaches from our teams and others, and point out where additional research is needed.

29 citations


Proceedings ArticleDOI
TL;DR: Scenic is a probabilistic programming language for the design and analysis of perception systems, especially those based on machine learning; languages of this kind can be used in cyber-physical systems and robotics to write environment models, an essential prerequisite to any formal analysis.
Abstract: We propose a new probabilistic programming language for the design and analysis of perception systems, especially those based on machine learning. Specifically, we consider the problems of training a perception system to handle rare events, testing its performance under different conditions, and debugging failures. We show how a probabilistic programming language can help address these problems by specifying distributions encoding interesting types of inputs and sampling these to generate specialized training and test sets. More generally, such languages can be used for cyber-physical systems and robotics to write environment models, an essential prerequisite to any formal analysis. In this paper, we focus on systems like autonomous cars and robots, whose environment is a "scene", a configuration of physical objects and agents. We design a domain-specific language, Scenic, for describing "scenarios" that are distributions over scenes. As a probabilistic programming language, Scenic allows assigning distributions to features of the scene, as well as declaratively imposing hard and soft constraints over the scene. We develop specialized techniques for sampling from the resulting distribution, taking advantage of the structure provided by Scenic's domain-specific syntax. Finally, we apply Scenic in a case study on a convolutional neural network designed to detect cars in road images, improving its performance beyond that achieved by state-of-the-art synthetic data generation methods.

Posted Content
TL;DR: The efficacy of the proposed framework for augmenting machine learning data sets with counterexamples is shown by comparing it to classical augmentation techniques on a case study of object detection in autonomous driving based on deep neural networks.
Abstract: We present a novel framework for augmenting data sets for machine learning based on counterexamples. Counterexamples are misclassified examples that have important properties for retraining and improving the model. Key components of our framework include a counterexample generator, which produces data items that are misclassified by the model and error tables, a novel data structure that stores information pertaining to misclassifications. Error tables can be used to explain the model's vulnerabilities and are used to efficiently generate counterexamples for augmentation. We show the efficacy of the proposed framework by comparing it to classical augmentation techniques on a case study of object detection in autonomous driving based on deep neural networks.
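
A schematic of the error-table idea follows; the field names, the stand-in model, and the failure mode are invented, and the real framework generates counterexamples with a dedicated generator rather than by random sampling.

    import random
    from collections import Counter

    random.seed(0)

    def model(brightness, x_pos):
        # stand-in for the real detector: misses cars in dark images
        return brightness > 0.3

    error_table = []   # one row per misclassified item, keyed by its parameters
    for _ in range(1000):
        b, x = random.uniform(0, 1), random.uniform(0, 100)
        if not model(b, x):            # ground truth: a car is always present
            error_table.append({"brightness": round(b, 1), "x_pos": round(x, -1)})

    # Mine the table: which feature bucket explains most misclassifications?
    counts = Counter(row["brightness"] for row in error_table)
    print("most error-prone brightness buckets:", counts.most_common(3))

    # Steer the next round of augmentation toward the weak region it exposes.
    new_items = [{"brightness": random.uniform(0.0, 0.3),
                  "x_pos": random.uniform(0, 100)} for _ in range(100)]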

Proceedings ArticleDOI
15 Oct 2018
TL;DR: The quotient set and its related operation can be used in any compositional methodology where design requirements are mapped into a set of components in a library; in particular, they address the so-called missing component problem.
Abstract: We introduce a novel notion of quotient set for a pair of contracts and the operation of quotient for assume-guarantee contracts. The quotient set and its related operation can be used in any compositional methodology where design requirements are mapped into a set of components in a library. In particular, they can be used for the so called missing component problem, where the given components are not capable of discharging the obligations of the requirements. In this case, the quotient operation identifies the contract for a component that, if added to the original set, makes the resulting system fulfill the requirements.
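
The sketch below encodes the quotient in a toy finite-universe model of assume-guarantee contracts; the quotient's shape (assumptions A ∧ G1, guarantees (G ∧ A1) ∨ ¬(A ∧ G1), in saturated form) follows the paper's formula, but the set-based encoding and the example contracts are illustrative assumptions.

    U = frozenset(range(6))            # hypothetical finite behavior universe

    def saturate(a, g):
        return a, g | (U - a)

    def compose(c1, c2):
        (a1, g1), (a2, g2) = saturate(*c1), saturate(*c2)
        g = g1 & g2
        return ((a1 & a2) | (U - g), g)

    def refines(c1, c2):
        (a1, g1), (a2, g2) = saturate(*c1), saturate(*c2)
        return a2 <= a1 and g1 <= g2

    def quotient(c, c1):
        # contract for the missing component, given spec c and existing part c1
        (a, g), (a1, g1) = saturate(*c), saturate(*c1)
        aq = a & g1
        return (aq, (g & a1) | (U - aq))

    spec = (frozenset({0, 1, 2}), frozenset({0, 1}))        # top-level requirement
    have = (frozenset({0, 1, 2, 3}), frozenset({0, 1, 2}))  # existing component
    missing = quotient(spec, have)
    print(refines(compose(have, missing), spec))            # expect True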

Book ChapterDOI
10 Nov 2018
TL;DR: In this paper, the authors propose a logic-based framework that allows domain-specific knowledge to be embedded into formulas in a parametric logical specification over time-series data, and then map a time series to a surface in the parameter space of the formula.
Abstract: Cyber-physical systems of today are generating large volumes of time-series data. As manual inspection of such data is not tractable, the need for learning methods to help discover logical structure in the data has increased. We propose a logic-based framework that allows domain-specific knowledge to be embedded into formulas in a parametric logical specification over time-series data. The key idea is to then map a time series to a surface in the parameter space of the formula. Given this mapping, we identify the Hausdorff distance between surfaces as a natural distance metric between two time-series data under the lens of the parametric specification. This enables embedding non-trivial domain-specific knowledge into the distance metric and then using off-the-shelf machine learning tools to label the data. After labeling the data, we demonstrate how to extract a logical specification for each label. Finally, we showcase our technique on real world traffic data to learn classifiers/monitors for slow-downs and traffic jams.
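
As a concrete, if simplified, rendering of the idea, the sketch below maps each time series to its validity boundary for a two-parameter property ("for all t ≥ τ, x(t) ≤ c") and compares boundaries with the Hausdorff distance via SciPy; the property and the traffic-like signals are invented.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def boundary(x):
        # tightest c making the property true at each tau: a running max from the right
        c_star = np.maximum.accumulate(x[::-1])[::-1]
        taus = np.arange(len(x), dtype=float)
        # the two coordinates could be normalized to comparable scales; omitted here
        return np.column_stack([taus, c_star])

    def hausdorff(p, q):
        return max(directed_hausdorff(p, q)[0], directed_hausdorff(q, p)[0])

    t = np.linspace(0, 10, 200)
    smooth_traffic = 30 + 2 * np.sin(t)
    slowdown = np.where(t > 5, 10 + 2 * np.sin(t), 30 + 2 * np.sin(t))
    print(hausdorff(boundary(smooth_traffic), boundary(slowdown)))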

Journal ArticleDOI
01 Sep 2018
TL;DR: The cyber components of autonomous vehicles are much more intelligent and complex than those of traditional vehicles, and interact more directly and closely with the physical environment.
Abstract: Cyber-physical systems (CPSs) are characterized by the seamless integration and close interaction of cyber components (e.g., sensors, computation nodes, communication networks) and physical processes (e.g., mechanical devices, physical environment, humans). The cyber components monitor, analyze, and control the physical processes, and react to their changes through feedback loops. A classic example of CPSs is autonomous vehicles. These vehicles collect information about the surrounding physical environment via heterogeneous sensors such as cameras, radar, and LIDAR; process and analyze the multi-modal information in real time with advanced computing devices such as GPUs, application-specific SoCs, and multicore CPUs; automatically make planning and control decisions; and continuously actuate the corresponding mechanical components. The cyber components of autonomous vehicles are much more intelligent and complex than those of traditional vehicles, and interact more directly and closely with the physical environment.

Book ChapterDOI
01 Jan 2018
TL;DR: It is shown via design examples that the relation of contract refinement can be extended, via heterogeneous refinement and vertical contracts, to deal with hierarchies of models that present heterogeneous architectures as well as behaviors expressed by heterogeneous formalisms.
Abstract: We propose the notions of heterogeneous refinement and vertical contracts as additions for any contract framework to provide full methodological support for multi-view and multi-layer system design with heterogeneous models. We rethink the relation of contract refinement in the context of layered design and discuss how it can be extended, via heterogeneous refinement and vertical contracts, to deal with hierarchies of models that present heterogeneous architectures as well as behaviors expressed by heterogeneous formalisms. We then show via design examples that such an extension can, indeed, encompass a richer set of design refinement relations, including support for synthesis methods and optimized mappings of specifications into implementations.

Proceedings ArticleDOI
19 Apr 2018
TL;DR: This paper describes an efficient technique to partition a specification, i.e., an LTL-based Assume/Guarantee contract, into a number of simpler sub-specifications that can be satisfied independently.
Abstract: Contract-Based Design is a methodology that allows for compositional design of complex systems. Given a contract representing a specification, it is possible to formally satisfy it by composing a number of simpler contracts. When these simpler contracts are chosen from a library of existing solutions, we talk about synthesis from contract libraries. There are techniques to automate the synthesis process, but they are computationally intensive, especially for complex specifications. In this paper, we describe an efficient technique to partition a specification, i.e., an LTL-based Assume/Guarantee contract, into a number of simpler sub-specifications that can be satisfied independently. Once all these smaller problems are solved, it is possible to safely merge their solutions to satisfy the original specification. We show the effectiveness of our technique in an industrial case study.
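
A much-simplified decomposition heuristic in this spirit is sketched below: guarantees that share no atomic propositions are grouped into independent sub-specifications via union-find. The paper's partitioning algorithm is more involved, and the formulas here are invented.

    import re
    from collections import defaultdict

    guarantees = ["G(req1 -> F grant1)", "G(req2 -> F grant2)",
                  "G(!(grant1 & busy))", "G(req3 -> X ack3)"]

    def atoms(formula):
        # temporal operators are uppercase, so lowercase identifiers are the atoms
        return set(re.findall(r"[a-z][a-z0-9_]*", formula))

    parent = list(range(len(guarantees)))
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i

    owner = {}
    for i, g in enumerate(guarantees):
        for a in atoms(g):
            if a in owner:
                parent[find(i)] = find(owner[a])   # union: a shared atom couples specs
            owner[a] = i

    groups = defaultdict(list)
    for i, g in enumerate(guarantees):
        groups[find(i)].append(g)
    for part in groups.values():
        print(part)   # each part can be synthesized independently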

Journal ArticleDOI
TL;DR: Two algorithms to compute dominant strategies for continuous two-player zero-sum games, based on the Counter-Example Guided Inductive Synthesis (CEGIS) paradigm, are presented, and it is shown that both algorithms are sound and terminate.
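
A bare-bones CEGIS loop for a scalar zero-sum game is sketched below to illustrate the candidate/counterexample interplay; the paper's algorithms handle continuous strategy spaces more carefully, and the game, threshold, and discretization here are invented.

    import numpy as np

    J = lambda u, w: (u - w) ** 2      # cost: u minimizes, adversary w maximizes
    U = np.linspace(0, 1, 101)         # synthesizer's candidate strategies
    W = np.linspace(0, 1, 101)         # adversary moves searched by the verifier
    threshold = 0.26                   # u is dominant if J(u, w) <= threshold for all w

    counterexamples = []
    for _ in range(20):
        # Synthesizer: any candidate consistent with all counterexamples so far.
        survivors = [u for u in U
                     if all(J(u, w) <= threshold for w in counterexamples)]
        if not survivors:
            print("no dominant strategy at this threshold")
            break
        u = survivors[0]
        # Verifier: search for an adversary move that falsifies the candidate.
        w_bad = max(W, key=lambda w: J(u, w))
        if J(u, w_bad) <= threshold:
            print("dominant strategy found: u =", u)
            break
        counterexamples.append(w_bad)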

Journal ArticleDOI
TL;DR: In this paper, an adaptive wireless body area network (WBAN) scheme is presented that reconfigures the network by learning from body kinematics and biosignals; periodic channel fluctuations can be exploited by reusing accelerometer data and scheduling packet transmissions at optimal times.
Abstract: The increasing penetration of wearable and implantable devices necessitates energy-efficient and robust ways of connecting them to each other and to the cloud. However, the wireless channel around the human body poses unique challenges such as a high and variable path-loss caused by frequent changes in the relative node positions as well as the surrounding environment. An adaptive wireless body area network (WBAN) scheme is presented that reconfigures the network by learning from body kinematics and biosignals. It has very low overhead since these signals are already captured by the WBAN sensor nodes to support their basic functionality. Periodic channel fluctuations in activities like walking can be exploited by reusing accelerometer data and scheduling packet transmissions at optimal times. Network states can be predicted based on changes in observed biosignals to reconfigure the network parameters in real time. A realistic body channel emulator that evaluates the path-loss for everyday human activities was developed to assess the efficacy of the proposed techniques. Simulation results show up to 41% improvement in packet delivery ratio (PDR) and up to 27% reduction in power consumption by intelligent scheduling at lower transmission power levels. Moreover, experimental results on a custom test-bed demonstrate an average PDR increase of 20% and 18% when using our adaptive EMG- and heart-rate-based transmission power control methods, respectively. The channel emulator and simulation code are made publicly available at this https URL.
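
The sketch below illustrates one of the ideas, reusing accelerometer periodicity to pick a transmit phase, on an invented signal model; the actual scheme learns from real kinematics and biosignals and was evaluated on a body channel emulator.

    import numpy as np

    fs = 100                                   # Hz
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(0)
    accel = np.sin(2 * np.pi * 1.8 * t) + 0.2 * rng.normal(size=t.size)
    channel_gain_db = -60 + 8 * np.sin(2 * np.pi * 1.8 * t + 0.7)  # gait-induced fading

    # Gait period from the first non-trivial autocorrelation peak.
    ac = np.correlate(accel, accel, mode="full")[accel.size - 1:]
    period = (np.argmax(ac[fs // 4:]) + fs // 4) / fs              # skip the lag-0 region
    print(f"estimated gait period: {period:.2f} s")

    # Best transmit phase: bin past channel gains by gait phase, pick the peak bin.
    phase = (t % period) / period
    bins = np.linspace(0, 1, 21)
    idx = np.digitize(phase, bins) - 1
    best_bin = max(range(20), key=lambda b: channel_gain_db[idx == b].mean())
    print(f"schedule packets around gait phase {bins[best_bin]:.2f}")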

Journal ArticleDOI
TL;DR: This paper presents the MDE process in use at Elettronica SpA (ELT) for the development of complex embedded systems integrating software and firmware, based on the adoption of SysML as the system-level modeling language and the use of Simulink for the refinement of selected subsystems.
Abstract: This paper presents the MDE process in use at Elettronica SpA (ELT) for the development of complex embedded systems integrating software and firmware. The process is based on the adoption of SysML as the system-level modeling language and the use of Simulink for the refinement of selected subsystems. Implementations are generated automatically for both the software (C++ code) and firmware parts, and communication adapters are automatically generated from SysML using a dedicated profile and open-source tools for modeling and code generation. The process starts from a SysML system model, developed according to the platform-based design paradigm, in which a functional model of the system is paired with a model of the execution platform. Subsystems are refined as Simulink models or hand-coded in C++. An implementation for Simulink models is generated as software code or firmware on FPGA. Based on the SysML system architecture specification, our framework drives the generation of Simulink models with consistent interfaces and allows the automatic generation of the communication code among all subsystems (including the HW–FW interface code). In addition, it provides for the automatic generation of connectors for system-level simulation and of test harnesses and mockups to ease the integration and verification stage. We provide early results on the time savings obtained by using these technologies in the development process.

Posted Content
TL;DR: An implementation for computing the measure of bounded LTL properties is provided; it leverages SAT model counting and performs independence checks on subexpressions to compute the measure and metric compositionally.
Abstract: We propose a measure and a metric on the sets of infinite traces generated by a set of atomic propositions. To compute these quantities, we first map properties to subsets of the real numbers and then take the Lebesgue measure of the resulting sets. We analyze how this measure is computed for Linear Temporal Logic (LTL) formulas. An implementation for computing the measure of bounded LTL properties is provided and explained. This implementation leverages SAT model counting and effects independence checks on subexpressions to compute the measure and metric compositionally.
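
A toy version of the bounded-measure computation follows, enumerating traces exhaustively where the paper's implementation uses SAT model counting; the atomic propositions, the bound, and the bounded semantics chosen for X at the last step are illustrative assumptions.

    import itertools

    K, ATOMS = 4, ("a", "b")

    def holds(trace):
        # property: G(a -> X b), bounded to K steps; "a" at the final step
        # falsifies it since X b has no witness (one bounded-semantics choice)
        return all(not step["a"] or (i + 1 < K and trace[i + 1]["b"])
                   for i, step in enumerate(trace))

    total = satisfied = 0
    for bits in itertools.product([False, True], repeat=K * len(ATOMS)):
        trace = [dict(zip(ATOMS, bits[2 * i: 2 * i + 2])) for i in range(K)]
        total += 1
        satisfied += holds(trace)

    print(satisfied / total)   # measure of the bounded property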

Proceedings ArticleDOI
24 Jun 2018
TL;DR: This work formulates the design space exploration of wireless networks, which jointly selects topology and component sizing, as an optimized mapping problem, and proposes an algorithm for efficient, compact encoding of feasible paths that can reduce the complexity of the optimization problem by orders of magnitude.
Abstract: We address the design space exploration of wireless networks to jointly select topology and component sizing. We formulate the exploration problem as an optimized mapping problem, where network elements are associated with components from pre-defined libraries to minimize a cost function under correctness guarantees. We express a rich set of system requirements as mixed integer linear constraints over path variables, denoting the presence or absence of paths between network nodes, and propose an algorithm for efficient, compact encoding of feasible paths that can reduce by orders of magnitude the complexity of the optimization problem. We incorporate our methods in a system-level design space exploration toolbox and evaluate their effectiveness on design examples from data collection and localization networks.
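
For contrast with the paper's compact encoding, the sketch below shows the naive alternative it avoids: explicit DFS enumeration of the simple paths over which path variables would range. The graph is invented.

    from collections import defaultdict

    edges = [("src", "a"), ("src", "b"), ("a", "b"),
             ("a", "sink"), ("b", "sink")]
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    def simple_paths(u, goal, seen=()):
        if u == goal:
            yield seen + (u,)
            return
        for v in adj[u]:
            if v not in seen:
                yield from simple_paths(v, goal, seen + (u,))

    paths = list(simple_paths("src", "sink"))
    print(len(paths), "path variables in the explicit encoding:")
    for p in paths:
        print(" -> ".join(p))
    # An MILP would add one binary variable per path, with constraints tying
    # each path variable to the installation of components along its edges.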

Posted Content
TL;DR: In this paper, the authors propose a logic-based framework that allows domain-specific knowledge to be embedded into formulas in a parametric logical specification over time-series data, and then map a time series to a surface in the parameter space of the formula.
Abstract: Cyber-physical systems of today are generating large volumes of time-series data. As manual inspection of such data is not tractable, the need for learning methods to help discover logical structure in the data has increased. We propose a logic-based framework that allows domain-specific knowledge to be embedded into formulas in a parametric logical specification over time-series data. The key idea is to then map a time series to a surface in the parameter space of the formula. Given this mapping, we identify the Hausdorff distance between boundaries as a natural distance metric between two time-series data under the lens of the parametric specification. This enables embedding non-trivial domain-specific knowledge into the distance metric and then using off-the-shelf machine learning tools to label the data. After labeling the data, we demonstrate how to extract a logical specification for each label. Finally, we showcase our technique on real world traffic data to learn classifiers/monitors for slow-downs and traffic jams.

Posted Content
TL;DR: A context-specific validation framework is proposed to quantify the quality of a learned model based on a distance measure between the closed-loop actual system and the learned model, together with an active sampling scheme to compute a probabilistic upper bound on this distance in a sample-efficient manner.
Abstract: With an increasing use of data-driven models to control robotic systems, it has become important to develop a methodology for validating such models before they can be deployed to design a controller for the actual system. Specifically, it must be ensured that the controller designed for a learned model would perform as expected on the actual physical system. We propose a context-specific validation framework to quantify the quality of a learned model based on a distance measure between the closed-loop actual system and the learned model. We then propose an active sampling scheme to compute a probabilistic upper bound on this distance in a sample-efficient manner. The proposed framework validates the learned model against only those behaviors of the system that are relevant for the purpose for which we intend to use this model, and does not require any a priori knowledge of the system dynamics. Several simulations illustrate the practicality of the proposed framework for validating the models of real-world systems, and consequently, for controller synthesis.
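
A minimal i.i.d. baseline for the idea is sketched below, with invented dynamics, learned model, and controller; the paper's contribution is an active sampling scheme that obtains such probabilistic bounds far more sample-efficiently.

    import numpy as np

    rng = np.random.default_rng(0)
    actual  = lambda x, u: 0.9 * x + u + 0.05 * rng.normal()   # the real plant
    learned = lambda x, u: 0.88 * x + u                        # slightly wrong model
    policy  = lambda x: -0.5 * x                               # controller under test

    def rollout(step, x0, horizon=30):
        xs = [x0]
        for _ in range(horizon):
            xs.append(step(xs[-1], policy(xs[-1])))
        return np.array(xs)

    dists = []
    for _ in range(200):                                       # sampled initial contexts
        x0 = rng.uniform(-2, 2)
        dists.append(np.max(np.abs(rollout(actual, x0) - rollout(learned, x0))))

    # Report a high empirical quantile as a crude stand-in for the paper's
    # probabilistic upper bound on the closed-loop model-system distance.
    print(f"empirical 95th percentile distance: {np.percentile(dists, 95):.3f}")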