
Showing papers by "Moshe Y. Vardi published in 2021"


Proceedings ArticleDOI
03 Mar 2021
TL;DR: In this paper, a redesigned version of the Ethics and Accountability in Computer Science course at Rice University is presented, incorporating elements from philosophy of technology, critical media theory, and science and technology studies to encourage students to learn ethics not only in a shallow sense, examining abstract principles or values to determine right and wrong, but also to engage with a series of deeper questions more closely related to present issues of social justice, relying on a structural understanding of these problems to develop potential sociotechnical solutions.
Abstract: As ethical questions around the development of contemporary computer technologies have become an increasing point of public and political concern, computer science departments in universities around the world have placed renewed emphasis on tech ethics undergraduate classes as a means to educate students on the large-scale social implications of their actions. Committed to the idea that tech ethics is an essential part of the undergraduate computer science educational curriculum, at Rice University this year we piloted a redesigned version of our Ethics and Accountability in Computer Science class. This effort represents our first attempt at implementing a "deep" tech ethics approach to the course. Incorporating elements from philosophy of technology, critical media theory, and science and technology studies, we encouraged students to learn ethics not only in a "shallow" sense, examining abstract principles or values to determine right and wrong, but also to engage with a series of "deeper" questions more closely related to present issues of social justice, relying on a structural understanding of these problems to develop potential sociotechnical solutions. In this article, we report on our implementation of this redesigned approach. We describe in detail the rationale and strategy for implementing this approach, present key elements of the redesigned syllabus, and discuss final student reflections and course evaluations. To conclude, we examine course achievements, limitations, and lessons learned toward the future, particularly in regard to the escalating social protests and the issues surrounding Covid-19.

15 citations


Book ChapterDOI
05 Jul 2021
TL;DR: ProCount as mentioned in this paper uses a graded project-join tree to compute exact literal-weighted projected model counts of propositional formulas in conjunctive normal form and achieves state-of-the-art performance.
Abstract: Recent work in weighted model counting proposed a unifying framework for dynamic-programming algorithms. The core of this framework is a project-join tree: an execution plan that specifies how Boolean variables are eliminated. We adapt this framework to compute exact literal-weighted projected model counts of propositional formulas in conjunctive normal form. Our key conceptual contribution is to define gradedness on project-join trees, a novel condition requiring irrelevant variables to be eliminated before relevant variables. We prove that building graded project-join trees can be reduced to building standard project-join trees and that graded project-join trees can be used to compute projected model counts. The resulting tool ProCount is competitive with the state-of-the-art tools D4P, projMC, and reSSAT, achieving the shortest solving time on 131 of the 390 benchmarks solved by at least one tool, out of 849 benchmarks in total.
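
To make the gradedness condition concrete, the sketch below spells out the quantifier ordering that a graded project-join tree enforces: irrelevant variables are handled by an inner existential projection, relevant variables by an outer weighted sum. This is a brute-force illustration of literal-weighted projected model counting under that ordering, not ProCount's dynamic-programming algorithm; the formula, weights, and function names are hypothetical.

```python
from itertools import product

def satisfies(clauses, assignment):
    """CNF check: every clause contains a literal made true by `assignment`."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

def projected_weighted_count(clauses, relevant, irrelevant, weight):
    """Literal-weighted projected model count by enumeration.
    weight[(var, value)] is the weight of assigning `value` to `var`;
    relevant and irrelevant are disjoint lists of variable indices."""
    total = 0.0
    for rel_vals in product([False, True], repeat=len(relevant)):
        partial = dict(zip(relevant, rel_vals))
        # Inner existential projection over the irrelevant variables --
        # the quantifier that gradedness pushes toward the leaves of the tree.
        exists = any(
            satisfies(clauses, {**partial, **dict(zip(irrelevant, irr_vals))})
            for irr_vals in product([False, True], repeat=len(irrelevant)))
        if exists:
            w = 1.0
            for v in relevant:
                w *= weight[(v, partial[v])]
            total += w  # outer weighted sum over the relevant variables
    return total

# Example: (x1 or y1) and (not x1 or y1), relevant {x1}, irrelevant {y1}.
clauses = [[1, 2], [-1, 2]]
print(projected_weighted_count(clauses, relevant=[1], irrelevant=[2],
                               weight={(1, True): 0.3, (1, False): 0.7}))
# -> 1.0: for both values of x1 some value of y1 satisfies the formula.
```

ProCount's contribution is to evaluate this same quantifier ordering along a graded project-join tree instead of by enumeration, which is what makes the computation scale.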

12 citations


Proceedings ArticleDOI
30 May 2021
TL;DR: In this paper, a planning method for collaborative human-robot manipulation tasks is presented that synthesizes an optimal policy over a probabilistic model of the interaction. Unlike prior work, which posed the problem as a 2-player deterministic game with a limited number of human moves, this approach lets the robot take human actions into account without assuming determinism or a bound on human moves.
Abstract: Robots have begun operating and collaborating with humans in industrial and social settings. This collaboration introduces challenges: the robot must plan while taking the human’s actions into account. In prior work, the problem was posed as a 2-player deterministic game, with a limited number of human moves. The limit on human moves is unintuitive, and in many settings determinism is undesirable. In this paper, we present a novel planning method for collaborative human-robot manipulation tasks via probabilistic synthesis. We introduce a probabilistic manipulation domain that captures the interaction by allowing for both robot and human actions with states that represent the configurations of the objects in the workspace. The task is specified using Linear Temporal Logic over finite traces (LTLf). We then transform our manipulation domain into a Markov Decision Process (MDP) and synthesize an optimal policy to satisfy the specification on this MDP. We present two novel contributions: a formalization of probabilistic manipulation domains allowing us to apply existing techniques and a comparison of different encodings of these domains. Our framework is validated on a physical UR5 robot.
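
As a rough illustration of the pipeline described above (and only that: the manipulation domain, labels, and numbers below are hypothetical, the human's behavior is folded into the transition probabilities, and the LTLf-to-DFA compilation step is not shown), the sketch composes a small MDP with a hand-written DFA and runs value iteration to maximize the probability of reaching an accepting product state.

```python
# MDP: transitions[state][action] = list of (probability, next_state).
transitions = {
    "s0": {"pick":  [(0.8, "s1"), (0.2, "s0")],
           "wait":  [(1.0, "s0")]},
    "s1": {"place": [(0.9, "s2"), (0.1, "s0")],
           "wait":  [(1.0, "s1")]},
    "s2": {"wait":  [(1.0, "s2")]},
}
label = {"s0": None, "s1": "holding", "s2": "placed"}  # atomic propositions

# Hand-written DFA for "eventually placed" (stand-in for an LTLf compilation):
# q0 --placed--> q1 (accepting); every other step stays put.
def dfa_step(q, prop):
    return "q1" if (q == "q0" and prop == "placed") else q

dfa_accepting = {"q1"}

def max_reach_probability(iters=200):
    """Value iteration on the MDP x DFA product: maximal probability of
    eventually reaching a product state with an accepting DFA component."""
    states = [(s, q) for s in transitions for q in ("q0", "q1")]
    value = {p: (1.0 if p[1] in dfa_accepting else 0.0) for p in states}
    for _ in range(iters):
        for (s, q) in states:
            if q in dfa_accepting:
                continue  # specification already satisfied; value stays 1.0
            best = 0.0
            for action, succs in transitions[s].items():
                exp = sum(p * value[(s2, dfa_step(q, label[s2]))]
                          for p, s2 in succs)
                best = max(best, exp)
            value[(s, q)] = best
    return value

print(max_reach_probability()[("s0", "q0")])  # close to 1.0 in this toy domain
```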

6 citations


Journal ArticleDOI
Moshe Y. Vardi

4 citations


Book ChapterDOI
27 Mar 2021
TL;DR: In this paper, the authors argue that in many cases it may be better to replace the optimization problem with the satisficing problem, where instead of searching for optimal solutions, the goal is to search for solutions that adhere to a given threshold bound.
Abstract: Several problems in planning and reactive synthesis can be reduced to the analysis of two-player quantitative graph games. Optimization is one form of analysis. We argue that in many cases it may be better to replace the optimization problem with the satisficing problem, where instead of searching for optimal solutions, the goal is to search for solutions that adhere to a given threshold bound.

4 citations


Journal ArticleDOI
TL;DR: In this article, an extension of Strategy Logic for the imperfect-information setting, called SLii, is introduced and model-checking SLii restricted to hierarchical instances is shown to be decidable.
Abstract: We introduce an extension of Strategy Logic for the imperfect-information setting, called SLii, and study its model-checking problem. As this logic naturally captures multi-player games with imperfect information, this problem is undecidable; but we introduce a syntactical class of “hierarchical instances” for which, intuitively, as one goes down the syntactic tree of the formula, strategy quantifications are concerned with finer observations of the model, and we prove that model-checking SLii restricted to hierarchical instances is decidable. This result, because it allows for complex patterns of existential and universal quantification on strategies, greatly generalises the decidability of distributed synthesis for systems with hierarchical information. It allows us to easily derive new decidability results concerning strategic problems under imperfect information such as the existence of Nash equilibria or rational synthesis. To establish this result, we go through an intermediary, “low-level” logic much more adapted to automata techniques. QCTL* is an extension of CTL* with second-order quantification over atomic propositions that has been used to study strategic logics with perfect information. We extend it to the imperfect-information setting by parameterising second-order quantifiers with observations. The simple syntax of the resulting logic, QCTL*ii, allows us to provide a conceptually neat reduction of SLii to QCTL*ii that separates concerns, allowing one to forget about strategies and players and focus solely on second-order quantification. While the model-checking problem of QCTL*ii is, in general, undecidable, we identify a syntactic fragment of hierarchical formulas and prove, using an automata-theoretic approach, that it is decidable.

3 citations



Book ChapterDOI
18 Oct 2021
TL;DR: In this article, the authors show that finite-horizon temporal synthesis offers enough algorithmic advantages to compensate for the loss in expressiveness relative to infinite-horizon Linear Temporal Logic (LTL) reasoning.
Abstract: Linear Temporal Logic (LTL), proposed by Pnueli in 1977 for reasoning about ongoing programs, was defined over infinite traces. The motivation for this was the desire to model arbitrarily long computations. While this approach has been highly successful in the context of model checking, it has been less successful in the context of reactive synthesis, due to the challenging algorithmics of infinite-horizon temporal synthesis. In this paper we show that focusing on finite-horizon temporal synthesis offers enough algorithmic advantages to compensate for the loss in expressiveness. In fact, finite-horizon reasoning is useful even in the context of infinite-horizon applications.
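
The algorithmic advantage alluded to above can be made concrete: once an LTLf specification has been compiled into a DFA over joint input/output letters (compilation not shown), realizability reduces to a reachability game solved by a backward fixed point, with no parity or Rabin acceptance conditions involved. The toy DFA and the move order below are illustrative assumptions, not taken from the paper.

```python
# DFA over joint letters (input_bit, output_bit); the system wins a play as
# soon as the accepting state is reached.
delta = {
    ("q0", (0, 0)): "q0", ("q0", (0, 1)): "acc",
    ("q0", (1, 0)): "acc", ("q0", (1, 1)): "q0",
    ("acc", (0, 0)): "acc", ("acc", (0, 1)): "acc",
    ("acc", (1, 0)): "acc", ("acc", (1, 1)): "acc",
}
states, inputs, outputs = {"q0", "acc"}, [0, 1], [0, 1]
accepting = {"acc"}

def winning_region():
    """Backward fixed point: states from which the system can force a visit
    to an accepting state, assuming the environment reveals its input first
    and the system then chooses its output."""
    win = set(accepting)
    changed = True
    while changed:
        changed = False
        for q in states - win:
            # For every input there must exist an output leading into `win`.
            if all(any(delta[(q, (i, o))] in win for o in outputs)
                   for i in inputs):
                win.add(q)
                changed = True
    return win

# The specification is realizable iff the initial DFA state is winning.
print("realizable" if "q0" in winning_region() else "unrealizable")
```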

2 citations


Journal ArticleDOI
17 Sep 2021
TL;DR: In this paper, the Hopcroft and Brzozowski algorithms are compared in the context of semi-symbolic (explicit states, symbolic transition functions) automata representation.
Abstract: Temporal logic is often used to describe temporal properties in AI applications. The most popular language for doing so is Linear Temporal Logic (LTL). Recently, LTL on finite traces, LTLf, has been investigated in several contexts. In order to reason about LTLf, formulas are typically compiled into deterministic finite automata (DFA), as the intermediate semantic representation. Moreover, because DFAs have a canonical representation, efficient minimization algorithms can be applied to maximally reduce DFA size, helping to speed up subsequent computations. Here, we present a thorough investigation of two classical minimization algorithms, namely, the Hopcroft and Brzozowski algorithms. More specifically, we show how to apply these algorithms to a semi-symbolic (explicit states, symbolic transition functions) automata representation. We then compare the two algorithms in the context of an LTLf-synthesis framework, starting from LTLf formulas. While earlier studies comparing the two algorithms starting from randomly generated automata concluded that neither algorithm dominates, our results suggest that, starting from LTLf formulas, Hopcroft's algorithm is the best choice in the context of reactive synthesis. Deeper analysis explains why the supposed advantage of Brzozowski's algorithm does not materialize in practice.
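
For reference, the sketch below shows Brzozowski's double-reversal minimization on a fully explicit automaton; the paper applies both Brzozowski's and Hopcroft's algorithms to a semi-symbolic representation (explicit states, symbolic transition functions), which this illustration does not attempt. The toy DFA is hypothetical.

```python
def reverse(states, alphabet, delta, initial, accepting):
    """Reverse every transition; accepting states become the initial set
    and the old initial state becomes the unique accepting state."""
    rdelta = {(q, a): set() for q in states for a in alphabet}
    for (q, a), q2 in delta.items():
        rdelta[(q2, a)].add(q)
    return states, alphabet, rdelta, set(accepting), {initial}

def determinize(states, alphabet, ndelta, initial_set, accepting):
    """Reachable subset construction; a subset is accepting iff it meets
    the given accepting set."""
    start = frozenset(initial_set)
    dstates, ddelta, work = {start}, {}, [start]
    while work:
        S = work.pop()
        for a in alphabet:
            T = frozenset(q2 for q in S for q2 in ndelta.get((q, a), ()))
            ddelta[(S, a)] = T
            if T not in dstates:
                dstates.add(T)
                work.append(T)
    dacc = {S for S in dstates if S & accepting}
    return dstates, alphabet, ddelta, start, dacc

def brzozowski_minimize(states, alphabet, delta, initial, accepting):
    """Minimal DFA = determinize(reverse(determinize(reverse(A))))."""
    det = determinize(*reverse(states, alphabet, delta, initial, accepting))
    s2, al2, d2, i2, a2 = det
    return determinize(*reverse(s2, al2, d2, i2, a2))

# Toy DFA with a redundant state: q1 and q2 are equivalent.
delta = {("q0", "a"): "q1", ("q0", "b"): "q2",
         ("q1", "a"): "q3", ("q1", "b"): "q3",
         ("q2", "a"): "q3", ("q2", "b"): "q3",
         ("q3", "a"): "q3", ("q3", "b"): "q3"}
mstates, _, _, _, _ = brzozowski_minimize(
    {"q0", "q1", "q2", "q3"}, ["a", "b"], delta, "q0", {"q3"})
print(len(mstates))  # 3: the minimal DFA merges q1 and q2
```

The repeated subset constructions are exactly where Brzozowski's method can blow up, which is consistent with the paper's finding that Hopcroft's partition-refinement algorithm is the better choice in this setting.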

2 citations


Proceedings Article
01 Jan 2021

2 citations


Book ChapterDOI
18 Jul 2021
TL;DR: In this paper, the authors present a reactive synthesis interpretation of the adapter design pattern, where an algorithm takes an Adaptee transducer and a Target transducer, and the aim is to synthesize an Adapter transducer that, when composed with the Adaptee, generates a behavior that is equivalent to the behavior of the Target.
Abstract: In the Adapter Design Pattern, a programmer implements a Target interface by constructing an Adapter that accesses an existing Adaptee code. In this work, we present a reactive synthesis interpretation of the adapter design pattern, wherein an algorithm takes an Adaptee transducer and a Target transducer, and the aim is to synthesize an Adapter transducer that, when composed with the Adaptee, generates a behavior that is equivalent to the behavior of the Target. One use of such an algorithm is to synthesize controllers that achieve similar goals on different hardware platforms. While this problem can be solved with existing synthesis algorithms, current state-of-the-art tools fail to scale. To cope with the computational complexity of the problem, we introduce a special form of specification format, called Separated GR(k), which can be solved with a scalable synthesis algorithm but still allows for a large set of realistic specifications. We solve the realizability and the synthesis problems for Separated GR(k), and show how to exploit the separated nature of our specification to construct better algorithms, in terms of time complexity, than known algorithms for GR(k) synthesis. We then describe a tool, called SGR(k), that we have implemented based on the above approach and show, by experimental evaluation, how our tool outperforms current state-of-the-art tools on various benchmarks and test cases.
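
The sketch below illustrates only the problem statement, under simplifying assumptions (transducers modelled as tiny deterministic Mealy machines, and a candidate Adapter already in hand); it is not the Separated GR(k) synthesis algorithm. It checks that the Adapter composed with the Adaptee matches the Target on every input word by exploring the product of the three machines.

```python
from collections import deque

class Mealy:
    def __init__(self, initial, trans):
        # trans[(state, input_letter)] = (next_state, output_letter)
        self.initial, self.trans = initial, trans

    def step(self, state, letter):
        return self.trans[(state, letter)]

def equivalent(adapter, adaptee, target, inputs):
    """BFS over the product of (adapter, adaptee, target); returns False as
    soon as some input word yields outputs that differ."""
    start = (adapter.initial, adaptee.initial, target.initial)
    seen, queue = {start}, deque([start])
    while queue:
        a, b, t = queue.popleft()
        for letter in inputs:
            a2, translated = adapter.step(a, letter)       # Adapter output...
            b2, out_composed = adaptee.step(b, translated)  # ...feeds the Adaptee
            t2, out_target = target.step(t, letter)
            if out_composed != out_target:
                return False
            nxt = (a2, b2, t2)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Target: answer "req" with "ack". Adaptee: speaks a "legacy" alphabet.
target  = Mealy("t", {("t", "req"): ("t", "ack")})
adaptee = Mealy("s", {("s", "legacy_req"): ("s", "ack")})
adapter = Mealy("a", {("a", "req"): ("a", "legacy_req")})
print(equivalent(adapter, adaptee, target, inputs=["req"]))  # True
```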


Journal ArticleDOI
TL;DR: In this article, FourierSAT, an incomplete SAT solver based on the Fourier analysis (also known as the Walsh-Fourier transform) of Boolean functions, is proposed.
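
A hedged sketch of the core idea (not FourierSAT's actual algorithm or its optimization heuristics): each clause is replaced by the multilinear Walsh-Fourier extension of its "falsified" indicator over [-1, 1]^n, the penalties are summed, and the sum is minimized by projected gradient descent before rounding. The formula and parameters below are illustrative.

```python
import random

def clause_penalty(clause, x):
    """Multilinear extension of 'all literals false': product of (1 - l)/2,
    with the convention +1 = true and -1 = false."""
    p = 1.0
    for lit in clause:
        val = x[abs(lit)] if lit > 0 else -x[abs(lit)]
        p *= (1.0 - val) / 2.0
    return p

def objective(clauses, x):
    return sum(clause_penalty(c, x) for c in clauses)

def solve(clauses, n_vars, steps=2000, lr=0.2, eps=1e-4):
    """Projected gradient descent on the summed penalties over [-1, 1]^n,
    using a simple finite-difference gradient, then rounding."""
    x = {v: random.uniform(-1, 1) for v in range(1, n_vars + 1)}
    for _ in range(steps):
        for v in x:
            x_hi = dict(x); x_hi[v] = min(1.0, x[v] + eps)
            x_lo = dict(x); x_lo[v] = max(-1.0, x[v] - eps)
            grad = (objective(clauses, x_hi) - objective(clauses, x_lo)) \
                   / (x_hi[v] - x_lo[v])
            x[v] = max(-1.0, min(1.0, x[v] - lr * grad))
    assignment = {v: x[v] > 0 for v in x}
    vertex = {v: (1.0 if b else -1.0) for v, b in assignment.items()}
    return assignment, objective(clauses, vertex)

random.seed(0)
clauses = [[1, 2], [-1, 3], [-2, -3]]  # a small satisfiable CNF
assignment, penalty = solve(clauses, n_vars=3)
print(assignment, "unsatisfied clauses:", penalty)
```

Like any incomplete solver, a continuous-optimization heuristic of this kind can land in a local optimum and return a non-satisfying assignment.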

Posted Content
TL;DR: In this paper, the authors present improved congruence relations for Büchi automata that can be exponentially coarser than the classical one, and give asymptotically optimal congruence relations of size $2^{\mathcal{O}(n \log n)}$, where $n$ is the number of states of the automaton.
Abstract: We revisit here congruence relations for Büchi automata, which play a central role in automata-based verification. The size of the classical congruence relation is in $3^{\mathcal{O}(n^2)}$, where $n$ is the number of states of a given Büchi automaton $\mathcal{A}$. Here we present improved congruence relations that can be exponentially coarser than the classical one. We further give asymptotically optimal congruence relations of size $2^{\mathcal{O}(n \log n)}$. Based on these optimal congruence relations, we obtain an optimal translation from Büchi automata to a family of deterministic finite automata (FDFW) that accepts the complementary language. To the best of our knowledge, our construction is the first direct and optimal translation from Büchi automata to FDFWs.
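
For context, the classical congruence the abstract refers to is standardly defined as follows (this is the textbook, Ramsey-style definition, recalled here for convenience rather than taken from the paper): two finite words are congruent when they connect every pair of states in the same way, both with and without passing through an accepting state.

```latex
u \approx_{\mathcal{A}} v \;\iff\; \forall p, q \in Q:\;
  \big(p \xrightarrow{\,u\,} q \iff p \xrightarrow{\,v\,} q\big)
  \;\wedge\;
  \big(p \xrightarrow{\,u\,}_{F} q \iff p \xrightarrow{\,v\,}_{F} q\big)
```

Each pair $(p, q)$ can be connected in one of three ways (no $u$-path, a $u$-path but none through $F$, or a $u$-path through $F$), which gives the $3^{\mathcal{O}(n^2)}$ bound on the index quoted above.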

Posted Content
TL;DR: In this article, a combination of safety games and lasso testing in Büchi automata is used to check for and find pure strategy Nash equilibria in multiagent concurrent games with finite-horizon temporal goals.
Abstract: The problem of finding pure strategy Nash equilibria in multiagent concurrent games with finite-horizon temporal goals has received some recent attention. Earlier work solved this problem through the use of Rabin automata. In this work, we take advantage of the finite-horizon nature of the agents' goals and show that checking for and finding pure strategy Nash equilibria can be done using a combination of safety games and lasso testing in Büchi automata. To separate strategic reasoning from temporal reasoning, we model agents' goals by deterministic finite-word automata (DFAs), since finite-horizon logics such as LTLf and LDLf are reasoned about through conversion to equivalent DFAs. This allows us to characterize the complexity of the problem as PSPACE-complete.
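
To make the "lasso testing" step concrete, the sketch below checks whether an ultimately periodic word (a lasso, given as a prefix and a cycle) is accepted by a Büchi automaton, restricted to the deterministic case for brevity; the automaton is a hypothetical example and this is not the paper's construction.

```python
def lasso_accepted(delta, accepting, initial, prefix, cycle):
    """Check whether prefix . cycle^omega is accepted by a *deterministic*
    Büchi automaton given as delta[(state, letter)] = state."""
    state = initial
    for a in prefix:
        state = delta[(state, a)]
    seen = {}      # cycle-start state -> index of its first occurrence
    history = []   # per cycle iteration: was an accepting state visited?
    while state not in seen:
        seen[state] = len(history)
        hit = state in accepting
        for a in cycle:
            state = delta[(state, a)]
            hit = hit or state in accepting
        history.append(hit)
    # The run eventually repeats the iterations from the first recurrence on;
    # it is accepting iff one of those iterations visits an accepting state.
    return any(history[seen[state]:])

# Deterministic Büchi automaton for "infinitely many a's": q1 is accepting
# and is visited exactly when the last letter read was 'a'.
delta = {("q0", "a"): "q1", ("q0", "b"): "q0",
         ("q1", "a"): "q1", ("q1", "b"): "q0"}
print(lasso_accepted(delta, {"q1"}, "q0", prefix="b", cycle="ab"))  # True
print(lasso_accepted(delta, {"q1"}, "q0", prefix="a", cycle="b"))   # False
```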

Posted Content
TL;DR: In this article, the authors define and investigate the satisficing problem on a two-player graph game with the discounted-sum cost model, and show that while this problem can be solved using numerical methods just like the optimization problem, this approach does not render compelling benefits over optimization.
Abstract: Several problems in planning and reactive synthesis can be reduced to the analysis of two-player quantitative graph games. Optimization is one form of analysis. We argue that in many cases it may be better to replace the optimization problem with the satisficing problem, where instead of searching for optimal solutions, the goal is to search for solutions that adhere to a given threshold bound. This work defines and investigates the satisficing problem on a two-player graph game with the discounted-sum cost model. We show that while the satisficing problem can be solved using numerical methods just like the optimization problem, this approach does not render compelling benefits over optimization. When the discount factor is, however, an integer, we present another approach to satisficing, which is purely based on automata methods. We show that this approach is algorithmically more performant -- both theoretically and empirically -- and demonstrates the broader applicability of satisficing over optimization.
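
For orientation, the sketch below shows the numerical baseline that the satisficing approach is compared against (not the automata-based method the paper advocates): value iteration for the optimal discounted-sum value of a two-player graph game, followed by the threshold check that constitutes the satisficing query. The graph, ownership map, and discount factor are illustrative.

```python
# edges[node] = list of (edge_weight, successor); owner[node] in {"max", "min"}
edges = {
    "a": [(2.0, "b"), (0.0, "c")],
    "b": [(1.0, "a")],
    "c": [(4.0, "a")],
}
owner = {"a": "max", "b": "min", "c": "min"}
DISCOUNT = 0.5

def discounted_values(iters=200):
    """Value iteration: the protagonist (max) maximizes and the adversary
    (min) minimizes the discounted sum of edge weights along the play."""
    value = {v: 0.0 for v in edges}
    for _ in range(iters):
        new = {}
        for v, succs in edges.items():
            options = [w + DISCOUNT * value[u] for w, u in succs]
            new[v] = max(options) if owner[v] == "max" else min(options)
        value = new
    return value

def satisfices(node, threshold):
    """Decision version: can the protagonist guarantee at least `threshold`
    of discounted payoff from `node`?"""
    return discounted_values()[node] >= threshold

vals = discounted_values()
print(vals["a"])             # optimal guaranteed value from node "a"
print(satisfices("a", 2.0))  # satisficing query against threshold 2.0
```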

Proceedings ArticleDOI
Moshe Y. Vardi
01 Aug 2021
TL;DR: The year 2019 marked the 70th anniversary of Alan Turing's 1949 paper, "Checking a Large Routine", and the 50th anniversary of Tony Hoare's paper "An Axiomatic Basis for Computer Programming", as mentioned in this paper.
Abstract: The year 2019 saw the 70th anniversary of Alan Turing’s 1949 paper, “Checking a Large Routine”, and the 50th anniversary of Tony Hoare’s paper, “An Axiomatic Basis for Computer Programming”. In the latter paper, Hoare stated: “When the correctness of a program, its compiler, and the hardware of the computer have all been established with mathematical certainty, it will be possible to place great reliance on the results of the program, and predict their properties with a confidence limited only by the reliability of the electronics.” In this talk, I will review the history of this vision, describing the obstacles, the controversies, and the progress milestones. I will conclude with a description of both the impressive progress and the dramatic failures exhibited over the past few years.

Posted Content
TL;DR: In this article, the authors introduce Automata Linear Dynamic Logic on Finite Traces (ALDLf) and show that satisfiability for ALDLf formulas is in PSPACE.
Abstract: Temporal logics are widely used by the Formal Methods and AI communities. Linear Temporal Logic (LTL) is a popular temporal logic and is valued for its ease of use as well as its balance between expressiveness and complexity. LTL is equivalent in expressiveness to Monadic First-Order Logic, and satisfiability for LTL is PSPACE-complete. Linear Dynamic Logic (LDL), another temporal logic, is equivalent to Monadic Second-Order Logic, but its method of satisfiability checking cannot be applied to a nontrivial subset of LDL formulas. Here we introduce Automata Linear Dynamic Logic on Finite Traces (ALDLf) and show that satisfiability for ALDLf formulas is in PSPACE. A variant of Linear Dynamic Logic on Finite Traces (LDLf), ALDLf combines propositional logic with nondeterministic finite automata (NFA) to express temporal constraints. ALDLf is equivalent in expressiveness to Monadic Second-Order Logic. This is a gain in expressiveness over LTL at no cost.

Journal ArticleDOI
TL;DR: In this article, the use of icatibant, a bradykinin B2 receptor antagonist, to treat HAE attacks in pediatric patients with a confirmed diagnosis of C1-INH-HAE was investigated.
Abstract: Hereditary angioedema (HAE) is rare disease characterized by recurrent, unpredictable, and debilitating attacks of subcutaneous/submucosal tissue swelling.1,2 The reported median age of onset of HAE due to C1 inhibitor deficiency/dysfunction (type 1/2; C1-INH-HAE) is 11-12 years.1,3 Treatment options for pediatric patients are limited, owing to low childhood diagnosis rates and low representation in investigative clinical trials.4 We present a multicenter, open-label, single-arm, phase 3 study (NCT01386658) investigating the use of icatibant, a bradykinin B2 receptor antagonist, to treat HAE attacks in pediatric patients with a confirmed diagnosis of C1-INH-HAE.5 In Part 1, patients (aged 2 to <18 years with confirmed diagnosis of C1-INH-HAE) received an icatibant injection in the presence or absence of an attack. Icatibant showed acceptable safety and tolerability, and the treatment response to the first icatibant injection (n = 22) was consistent with that observed in adults, with median time to onset of symptom relief (TOSR) of 1.0 h. The European Medicines Agency subsequently approved icatibant in 2017 for use in pediatric patients aged 2-17 years.