Author

Radhia Cousot

Bio: Radhia Cousot is an academic researcher from the Centre national de la recherche scientifique. Her research focuses on abstract interpretation and static analysis. She has an h-index of 30 and has co-authored 68 publications receiving 13,958 citations. Previous affiliations of Radhia Cousot include École Polytechnique and École Normale Supérieure.


Papers
Proceedings ArticleDOI
01 Jan 1977
TL;DR: In this paper, the abstract interpretation of programs is used to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations.
Abstract: A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515 * 17 may be understood to denote computations on the abstract universe {(+), (-), (±)} where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515 * 17 → -(+) * (+) → (-) * (+) → (-) proves that -1515 * 17 is a negative number. Abstract interpretation is concerned with a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515 + 17 → -(+) + (+) → (-) + (+) → (±)). Despite its fundamentally incomplete results, abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried out in the absence of certainty about their feasibility, …).
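The rule-of-signs example is concrete enough to run. The following Python sketch reproduces both abstract executions; the three-valued encoding and all function names are illustrative choices rather than anything from the paper, and zero is folded into (+) to keep the domain of the example.

# A minimal sketch of the rule-of-signs abstract interpreter described above.
# Illustrative assumption: zero is treated as (+), matching the three-valued
# domain {(+), (-), (±)} of the example.

POS, NEG, TOP = "(+)", "(-)", "(±)"

def alpha(n):
    # Abstraction of a concrete integer: keep only its sign.
    return POS if n >= 0 else NEG

def abs_neg(a):
    return {POS: NEG, NEG: POS, TOP: TOP}[a]

def abs_mul(a, b):
    # The rule of signs for * is exact: no precision is lost.
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def abs_add(a, b):
    # (+) + (-) has unknown sign: the only source of imprecision here.
    return a if a == b else TOP

# -1515 * 17  ->  -(+) * (+)  ->  (-) * (+)  ->  (-)
print(abs_mul(abs_neg(alpha(1515)), alpha(17)))   # prints (-)
# -1515 + 17  ->  -(+) + (+)  ->  (-) + (+)  ->  (±)
print(abs_add(abs_neg(alpha(1515)), alpha(17)))   # prints (±)

Multiplication stays exact under the rule of signs, while adding values of opposite signs yields (±): exactly the sound but fundamentally incomplete behaviour the abstract describes.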

6,829 citations

Proceedings ArticleDOI
01 Jan 1979
TL;DR: This paper is devoted to the systematic and correct design of program analysis frameworks with respect to a formal semantics, building on the lattice-theoretic approach to approximate semantic analysis of programs.
Abstract: Semantic analysis of programs is essential in optimizing compilers and program verification systems. It encompasses data flow analysis, data type determination, generation of approximate invariant assertions, etc. Several recent papers (among others Cousot & Cousot[77a], Graham & Wegman[76], Kam & Ullman[76], Kildall[73], Rosen[78], Tarjan[76], Wegbreit[75]) have introduced abstract approaches to program analysis which are tantamount to the use of a program analysis framework (A,t,a) where A is a lattice of (approximate) assertions, t is an (approximate) predicate transformer and a is an often implicit function specifying the meaning of the elements of A. This paper is devoted to the systematic and correct design of program analysis frameworks with respect to a formal semantics. Preliminary definitions are given in Section 2 concerning the merge over all paths and (least) fixpoint program-wide analysis methods. In Section 3 we briefly define the (forward and backward) deductive semantics of programs which is later used as a formal basis in order to prove the correctness of the approximate program analysis frameworks. Section 4 very briefly recalls the main elements of the lattice theoretic approach to approximate semantic analysis of programs. The design of a space of approximate assertions A is studied in Section 5. We first justify the very reasonable assumption that A must be chosen such that the exact invariant assertions of any program must have an upper approximation in A and that the approximate analysis of any program must be performed using a deterministic process. These assumptions are shown to imply that A is a Moore family, that the approximation operator (which defines the least upper approximation of any assertion) is an upper closure operator and that A is necessarily a complete lattice. We next show that the connection between a space of approximate assertions and a computer representation is naturally made using a pair of isotone adjoined functions. This type of connection between two complete lattices is related to Galois connections thus making available classical mathematical results. Additional results are proved; they hold when no two approximate assertions have the same meaning. In Section 6 we study and exemplify various methods which can be used in order to define a space of approximate assertions or equivalently an approximation function. They include the characterization of the least Moore family containing an arbitrary set of assertions, the construction of the least closure operator greater than or equal to an arbitrary approximation function, the definition of closure operators by composition, the definition of a space of approximate assertions by means of a complete join congruence relation or by means of a family of principal ideals. Section 7 is dedicated to the design of the approximate predicate transformer induced by a space of approximate assertions. First we look for a reasonable definition of the correctness of approximate predicate transformers and show that a local correctness condition can be given which has to be verified for every type of elementary statement. This local correctness condition ensures that the (merge over all paths or fixpoint) global analysis of any program is correct. Since isotony is not required for approximate predicate transformers to be correct it is shown that non-isotone program analysis frameworks are manageable although it is later argued that the isotony hypothesis is natural.
We next show that among all possible approximate predicate transformers which can be used with a given space of approximate assertions there exists a best one which provides the maximum information relative to a program-wide analysis method. The best approximate predicate transformer induced by a space of approximate assertions turns out to be isotone. Some interesting consequences of the existence of a best predicate transformer are examined. One is that we have in hand a formal specification of the programs which have to be written in order to implement a program analysis framework once a representation of the space of approximate assertions has been chosen. Examples are given, including ones where the semantics of programs is formalized using Hoare[78]'s sets of traces. In Section 8 we show that a hierarchy of approximate analyses can be defined according to the fineness of the approximations specified by a program analysis framework. Some elements of the hierarchy are briefly exhibited and related to the relevant literature. In Section 9 we consider global program analysis methods. The distinction between "distributive" and "non-distributive" program analysis frameworks is studied. It is shown that when the best approximate predicate transformer is considered the coincidence or not of the merge over all paths and least fixpoint global analyses of programs is a consequence of the choice of the space of approximate assertions. It is shown that the space of approximate assertions can always be refined so that the merge over all paths analysis of a program can be defined by means of a least fixpoint of isotone equations. Section 10 is devoted to the combination of program analysis frameworks. We study and exemplify how to perform the "sum", "product" and "power" of program analysis frameworks. It is shown that combined analyses lead to more accurate information than the conjunction of the corresponding separate analyses, but this can only be achieved by a new design of the approximate predicate transformer induced by the combined program analysis frameworks.
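The pair of adjoined functions of Section 5 and the best approximate predicate transformer of Section 7 can both be made concrete on a small lattice. The Python sketch below instantiates them for the sign lattice over a finite stand-in for the integers; the encoding and the names alpha, gamma and best_transformer are assumptions made for illustration, not the paper's notation.

# A minimal sketch of a Galois connection on the sign lattice and of the
# best abstract transformer alpha . f . gamma it induces.

from itertools import product

BOT, NEG, POS, TOP = "bot", "(-)", "(+)", "top"
ORDER = {BOT: {BOT, NEG, POS, TOP}, NEG: {NEG, TOP},
         POS: {POS, TOP}, TOP: {TOP}}
leq = lambda a, b: b in ORDER[a]          # abstract partial order

UNIVERSE = set(range(-5, 6))              # finite stand-in for the integers

def gamma(a):
    # Concretization: the set of integers an abstract value denotes.
    return {BOT: set(), NEG: {n for n in UNIVERSE if n < 0},
            POS: {n for n in UNIVERSE if n >= 0}, TOP: UNIVERSE}[a]

def alpha(xs):
    # Abstraction: least abstract value whose concretization covers xs.
    return min((a for a in (BOT, NEG, POS, TOP) if xs <= gamma(a)),
               key=lambda a: len(gamma(a)))

# Adjunction check: alpha(X) <= a  iff  X <= gamma(a), on sampled sets X.
for xs, a in product([set(), {-3}, {2}, {-3, 2}], (BOT, NEG, POS, TOP)):
    assert leq(alpha(xs), a) == (xs <= gamma(a))

def best_transformer(f):
    # alpha . f . gamma: the most precise correct abstract version of f.
    return lambda a: alpha({f(n) for n in gamma(a)})

abs_negate = best_transformer(lambda n: -n)
print(abs_negate(NEG))   # prints (+): negating negatives yields positives

The assertion checked in the loop is exactly the adjunction alpha(X) ⊑ a iff X ⊆ gamma(a) characterizing a Galois connection, and best_transformer computes alpha∘f∘gamma, the best approximate transformer whose existence and isotony the paper establishes.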

1,726 citations

Journal ArticleDOI
Abstract Interpretation Frameworks. Patrick Cousot, LIENS, École Normale Supérieure, 45 rue d'Ulm, 75230 Paris cedex 05, France (cousot@dmi.ens.fr); Radhia Cousot, LIX, École Polytechnique, 91128 Palaiseau cedex, France (radhia@polytechnique.fr).

755 citations

Journal ArticleDOI
09 May 2003
TL;DR: It is shown that abstract interpretation-based static program analysis can be made efficient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms.
Abstract: We show that abstract interpretation-based static program analysis can be made efficient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms. This is achieved by refinement of a general purpose static analyzer and later adaptation to particular programs of the family by the end-user through parametrization. This is applied to the proof of soundness of data manipulation operations at the machine level for periodic synchronous safety critical embedded software. The main novelties are the design principle of static analyzers by refinement and adaptation through parametrization (Sect. 3 and 7), the symbolic manipulation of expressions to improve the precision of abstract transfer functions (Sect. 6.3), the octagon (Sect. 6.2.2), ellipsoid (Sect. 6.2.3), and decision tree (Sect. 6.2.4) abstract domains, all with sound handling of rounding errors in floating point computations, widening strategies (with thresholds: Sect. 7.1.2, delayed: Sect. 7.1.3) and the automatic determination of the parameters (parametrized packing: Sect. 7.2).
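Of the techniques listed, widening with thresholds (Sect. 7.1.2) is simple enough to illustrate in isolation. The Python sketch below analyzes the counter loop x = 0; while x < 100: x = x + 1 on a toy interval domain; the threshold list and the encoding are illustrative assumptions, not the analyzer's code.

# A minimal sketch of interval widening with thresholds.

import math

THRESHOLDS = [0, 1, 10, 100, math.inf]    # candidate stable bounds

def widen(old, new):
    # Unstable bounds jump to the next threshold rather than straight
    # to infinity, limiting the precision lost by widening.
    lo_old, hi_old = old
    lo_new, hi_new = new
    lo = lo_old if lo_new >= lo_old else -next(t for t in THRESHOLDS if -t <= lo_new)
    hi = hi_old if hi_new <= hi_old else next(t for t in THRESHOLDS if t >= hi_new)
    return (lo, hi)

x = (0, 0)                                 # abstract value at the loop head
while True:
    # Effect of one iteration: intersect with the guard x < 100, add 1.
    body = (x[0] + 1, min(x[1] + 1, 100))
    joined = (min(x[0], body[0]), max(x[1], body[1]))
    nxt = widen(x, joined)
    if nxt == x:
        break
    x = nxt
print(x)                                   # prints (0, 100)

The iteration stabilizes at (0, 100), the exact loop invariant, where plain widening without thresholds would have produced (0, inf).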

736 citations


Cited by
01 Sep 1996
TL;DR: Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation that often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs.
Abstract: Turing Lecture from the winners of the 2007 ACM A.M. Turing Award. In 1981, Edmund M. Clarke and E. Allen Emerson, working in the USA, and Joseph Sifakis working independently in France, authored seminal papers that founded what has become the highly successful field of model checking. This verification technology provides an algorithmic means of determining whether an abstract model---representing, for example, a hardware or software design---satisfies a formal specification expressed as a temporal logic (TL) formula. Moreover, if the property does not hold, the method identifies a counterexample execution that shows the source of the problem. The progression of model checking to the point where it can be successfully used for complex systems has required the development of sophisticated means of coping with what is known as the state explosion problem. Great strides have been made on this problem over the past 28 years by what is now a very large international research community. As a result, many major hardware and software companies are beginning to use model checking in practice. Examples of its use include the verification of VLSI circuits, communication protocols, software device drivers, real-time embedded systems, and security algorithms. The work of Clarke, Emerson, and Sifakis continues to be central to the success of this research area. Their work over the years has led to the creation of new logics for specification, new verification algorithms, and surprising theoretical results. Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation. This approach, for example, often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs. Model checking promises to have an even greater impact on the hardware and software industries in the future. ---Moshe Y. Vardi, Editor-in-Chief
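For invariant (safety) properties, the algorithmic core described here, exhaustive state-space exploration that returns a counterexample execution when the specification fails, fits in a few lines. In this Python sketch the two-process model without a lock is an illustrative assumption, not an example from the lecture.

# A minimal sketch of explicit-state invariant checking with
# counterexample extraction.

from collections import deque

def successors(state):
    # Each process cycles idle -> trying -> critical -> idle (no lock!).
    moves = {"idle": "trying", "trying": "critical", "critical": "idle"}
    for i in range(2):
        nxt = list(state)
        nxt[i] = moves[state[i]]
        yield tuple(nxt)

def check_invariant(init, invariant):
    # BFS over the state space; returns None or a counterexample trace.
    parent, frontier = {init: None}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            trace = []
            while s is not None:
                trace.append(s)
                s = parent[s]
            return list(reversed(trace))
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                frontier.append(t)
    return None

# Mutual exclusion fails because the model has no lock: the checker
# prints a trace ending in both processes being critical.
cex = check_invariant(("idle", "idle"),
                      lambda s: s != ("critical", "critical"))
print(cex)

Because the exploration is breadth-first, the reported counterexample is a shortest execution reaching the bad state, which is what makes such traces useful for diagnosing the source of the problem.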

7,392 citations

Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking, suitable not only as a classroom text but also as a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.
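As a small taste of the CTL verification algorithms the book covers, the Python sketch below computes the set of states satisfying EF p by the standard least-fixpoint labeling; the four-state Kripke structure is an illustrative assumption.

# A minimal sketch of CTL labeling for EF p: label states satisfying p,
# then repeatedly label any state with a labeled successor.

def check_EF(states, trans, sat_p):
    labeled = set(sat_p)
    changed = True
    while changed:                 # least fixpoint: EF p = p or EX EF p
        changed = False
        for s in states:
            if s not in labeled and any(t in labeled for t in trans[s]):
                labeled.add(s)
                changed = True
    return labeled

states = {"s0", "s1", "s2", "s3"}
trans = {"s0": {"s1", "s2"}, "s1": {"s1"}, "s2": {"s3"}, "s3": {"s3"}}
print(sorted(check_EF(states, trans, sat_p={"s3"})))   # ['s0', 's2', 's3']

The other CTL modalities are handled by analogous fixpoint computations over the transition relation, as the book develops in detail.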

4,905 citations

Book
07 Jan 1999

4,478 citations

Book
01 May 2011
TL;DR: The SPIN Model Checker is used both for teaching software verification techniques and for validating large-scale applications; it has been estimated that up to three-quarters of the $400 billion spent annually to hire programmers in the United States is ultimately spent on debugging.
Abstract: The SPIN Model Checker is used both for teaching software verification techniques and for validating large scale applications. The growing number of users has created a need for a more comprehensive user guide and a standard reference manual that describes the most recent version of the tool. This book fills that need. SPIN is used in over 40 countries. The official SPIN web site, spinroot.com, receives between 2500 and 3000 hits per day. It has been estimated that up to three-quarters of the $400 billion spent annually to hire programmers in the United States is ultimately spent on debugging.

2,530 citations

Journal ArticleDOI
TL;DR: A general framework for the formal specification and algorithmic analysis of hybrid systems is presented, together with symbolic model-checking and minimization procedures based on reachability analysis of an infinite state space.

2,091 citations