Author

Frédéric Besson

Bio: Frédéric Besson is an academic researcher at the French Institute for Research in Computer Science and Automation (Inria). He has contributed to research on topics including compilers and memory models, has an h-index of 16, and has co-authored 43 publications receiving 717 citations. His previous affiliations include the Centre national de la recherche scientifique and the University of Rennes.

Papers
Journal ArticleDOI
TL;DR: A minimalistic, security-dedicated program model that only contains procedure calls and run-time security checks is defined, and an automatic method is proposed for verifying that an implementation using local security checks satisfies a global security property.
Abstract: A fundamental problem in software-based security is whether local security checks inserted into the code are sufficient to implement a global security property. This article introduces a formalism based on a linear-time temporal logic for specifying global security properties pertaining to the control flow of the program, and illustrates its expressive power with a number of existing properties. We define a minimalistic, security-dedicated program model that only contains procedure call and run-time security checks and propose an automatic method for verifying that an implementation using local security checks satisfies a global security property. We then show how to instantiate the framework to the security architecture of Java 2 based on stack inspection and privileged method calls.
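The Java 2 security architecture the paper instantiates its framework on can be sketched informally. The following is a hypothetical illustration (not the paper's formal model, and the names are invented for this sketch): a permission check walks the call stack from the most recent frame, every frame must grant the permission, and a privileged frame cuts the walk short.

```python
# Hypothetical sketch of Java 2-style stack inspection (not the paper's
# formal model): check_permission walks the call stack from the most
# recent frame; every frame must grant the permission, and a
# "privileged" frame (doPrivileged) stops the inspection early.

class Frame:
    def __init__(self, perms, privileged=False):
        self.perms = set(perms)
        self.privileged = privileged

def check_permission(stack, perm):
    # stack[-1] is the most recently pushed frame
    for frame in reversed(stack):
        if perm not in frame.perms:
            return False          # an untrusted caller on the stack denies access
        if frame.privileged:
            return True           # a privileged frame cuts off the walk
    return True

trusted = Frame({"file.read"})
applet = Frame(set())

# applet -> trusted: the untrusted caller is still on the stack, so denied
assert check_permission([applet, trusted], "file.read") is False
# applet -> trusted running privileged: the walk stops at the trusted frame
assert check_permission([applet, Frame({"file.read"}, privileged=True)],
                        "file.read") is True
```

The paper's contribution is precisely to verify, once and for all, that such local run-time checks enforce a global temporal property of the whole program.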

87 citations

Book ChapterDOI
18 Apr 2006
TL;DR: This paper shows how to design efficient and lightweight reflexive tactics for a hierarchy of quantifier-free fragments of integer arithmetic that can cope with a wide class of linear and non-linear goals.
Abstract: When goals fall in decidable logic fragments, users of proof assistants expect automation. However, despite the availability of decision procedures, automation does not come for free. The reason is that decision procedures do not generate proof terms. In this paper, we show how to design efficient and lightweight reflexive tactics for a hierarchy of quantifier-free fragments of integer arithmetic. The tactics can cope with a wide class of linear and non-linear goals. For each logic fragment, off-the-shelf algorithms generate certificates of infeasibility that are then validated by straightforward reflexive checkers proved correct inside the proof assistant. This approach has been prototyped using the Coq proof assistant. Preliminary experiments are promising as the tactics run fast and produce small proof terms.
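The certificate-checking idea can be illustrated outside Coq. For a system of linear inequalities A x <= b over the rationals, a Farkas certificate of infeasibility is a vector of nonnegative multipliers lam with lam^T A = 0 and lam^T b < 0; checking it needs only a few sums and comparisons, which is what makes the reflexive checker cheap. A minimal sketch (the paper's actual checkers are Coq functions proved correct inside the proof assistant):

```python
from fractions import Fraction

# Sketch of a Farkas-certificate checker for infeasibility of A x <= b
# over the rationals: nonnegative multipliers lam with lam^T A = 0 and
# lam^T b < 0 prove that no solution exists.  Generating lam is the hard
# part (done by an off-the-shelf solver); checking it is trivial.

def check_farkas(A, b, lam):
    if any(l < 0 for l in lam):
        return False
    ncols = len(A[0])
    # lam^T A must be the zero row vector
    combo = [sum(l * row[j] for l, row in zip(lam, A)) for j in range(ncols)]
    if any(c != 0 for c in combo):
        return False
    # lam^T b must be strictly negative: the combination reads 0 <= (neg)
    return sum(l * bi for l, bi in zip(lam, b)) < 0

# x >= 1 and x <= 0, written as -x <= -1 and x <= 0: clearly infeasible
A = [[Fraction(-1)], [Fraction(1)]]
b = [Fraction(-1), Fraction(0)]
# multipliers (1, 1) combine the rows into 0 <= -1, a contradiction
assert check_farkas(A, b, [Fraction(1), Fraction(1)])
```

The asymmetry between search and check is the point: the untrusted solver does the work, and only the small checker has to be formally verified.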

80 citations

Journal ArticleDOI
TL;DR: It is shown how certified abstract interpretation can be used to build a PCC architecture in which the code producer can generate program certificates automatically, and an interval analysis is designed that makes it possible to generate certificates ascertaining that no out-of-bounds array accesses will occur.
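An interval analysis of the kind the TL;DR describes can be sketched in a few lines. The sketch below is illustrative only (the paper's analyser is a certified Coq development): it computes the interval of a loop index by iterating to a fixpoint, and the resulting interval serves as a certificate that every access is in bounds.

```python
# Illustrative interval analysis (not the paper's certified analyser)
# for the loop `i = 0; while i < n: use a[i]; i += 1`.  The fixpoint
# gives i in [0, n-1] at every access, so an array of length n is never
# indexed out of bounds.  In a PCC setting the producer ships the
# inferred interval and the consumer merely re-checks it.

def join(iv1, iv2):
    # least upper bound of two intervals (lo, hi)
    return (min(iv1[0], iv2[0]), max(iv1[1], iv2[1]))

def analyse_loop(n):
    i = (0, 0)                            # interval of i at the loop head
    while True:
        inside = (i[0], min(i[1], n - 1))       # refined by the guard i < n
        after_inc = (inside[0] + 1, inside[1] + 1)
        new = join(i, after_inc)
        if new == i:
            return inside                 # interval of i at every a[i] access
        i = new

def certifies_in_bounds(interval, length):
    lo, hi = interval
    return 0 <= lo and hi <= length - 1

iv = analyse_loop(10)
assert iv == (0, 9) and certifies_in_bounds(iv, 10)
```

Note that checking the certificate (the last two lines) is far cheaper than computing the fixpoint, which is the economics PCC relies on.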

57 citations

Book ChapterDOI
22 Sep 1999
TL;DR: An analysis is designed that verifies properties pertaining to the relation between the values of the numeric and boolean variables of a reactive system; it is expressed and proved correct with respect to the source program rather than an intermediate representation.
Abstract: We define an operational semantics for the Signal language and design an analysis that makes it possible to verify properties pertaining to the relation between the values of the numeric and boolean variables of a reactive system. A distinguishing feature of the analysis is that it is expressed and proved correct with respect to the source program rather than an intermediate representation of the program. The analysis computes a safe approximation of the set of reachable states by a symbolic fixed-point computation in the domain of convex polyhedra, using a novel widening operator based on the convex hull representation of polyhedra.
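The role of a widening operator in such a fixed-point computation is easiest to see on intervals, the simplest convex domain (the paper's operator works on general convex polyhedra via their convex hull representation; this is only an illustration of the idea). Widening jumps unstable bounds to infinity so the iteration stabilises in finitely many steps, even when a plain join would climb forever.

```python
import math

# Interval widening as an illustration of why polyhedral fixpoints
# terminate: a bound that keeps moving is pushed to infinity, so the
# ascending chain of abstract states is forced to stabilise.

def widen(old, new):
    lo = old[0] if new[0] >= old[0] else -math.inf
    hi = old[1] if new[1] <= old[1] else math.inf
    return (lo, hi)

# x = 0; while True: x += 1  -- the exact reachable set grows without bound
x = (0, 0)
for _ in range(10):
    successor = (x[0] + 1, x[1] + 1)
    joined = (min(x[0], successor[0]), max(x[1], successor[1]))
    nxt = widen(x, joined)
    if nxt == x:
        break                 # post-fixpoint reached
    x = nxt
assert x == (0, math.inf)     # stable after a single widening step
```

Without `widen`, the loop above would enumerate (0, 1), (0, 2), (0, 3), ... and never converge; the price of termination is precision, which motivates more careful operators such as the convex-hull-based one in the paper.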

53 citations

Proceedings ArticleDOI
26 Jun 2013
TL;DR: A novel approach to hybrid information flow monitoring is proposed that tracks knowledge about secret variables using logical formulae; sufficient conditions on the static analysis are derived for the soundness and relative precision of hybrid monitors.
Abstract: Motivated by the problem of stateless web tracking (fingerprinting), we propose a novel approach to hybrid information flow monitoring by tracking the knowledge about secret variables using logical formulae. This knowledge representation helps to compare and improve precision of hybrid information flow monitors. We define a generic hybrid monitor parametrised by a static analysis and derive sufficient conditions on the static analysis for soundness and relative precision of hybrid monitors. We instantiate the generic monitor with a combined static constant and dependency analysis. Several other hybrid monitors including those based on well-known hybrid techniques for information flow control are formalised as instances of our generic hybrid monitor. These monitors are organised into a hierarchy that establishes their relative precision. The whole framework is accompanied by a formalisation of the theory in the Coq proof assistant.
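The "hybrid" part of such monitors can be shown in miniature (this sketch is not the paper's knowledge-based monitor, and its function names are invented). A purely dynamic monitor taints only the branch actually taken, so `l = 0; if h: l = 1` leaks when `h` is false; a hybrid monitor also consults a static analysis for the variables the untaken branch could have written and taints those as well.

```python
# Minimal hybrid monitor step for a branch on a secret condition.
# taint is the set of secret-dependent variables; taken_writes and
# untaken_writes come respectively from execution and from a static
# analysis of the branch that was NOT executed.

def hybrid_branch(taint, cond_tainted, taken_writes, untaken_writes):
    if cond_tainted:
        # everything either branch may write now depends on the secret,
        # capturing the implicit flow through the untaken branch
        return taint | taken_writes | untaken_writes
    return set(taint)

taint = set()                       # l starts public
# `if h: l = 1` with h secret and h == False at run time: the taken
# (else) branch writes nothing, but static analysis reports that the
# then-branch writes l
taint = hybrid_branch(taint, cond_tainted=True,
                      taken_writes=set(),
                      untaken_writes={"l"})
assert "l" in taint                 # the implicit flow into l is caught
```

A purely dynamic monitor would leave `l` untainted here, which is exactly the imprecision the paper's static component exists to repair; the paper then ranks different choices of static analysis by the relative precision of the resulting monitors.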

38 citations


Cited by
Proceedings ArticleDOI
03 Nov 2014
TL;DR: An evaluation of the defensive techniques used by privacy-aware users finds that there exist subtle pitfalls, such as failing to clear state on multiple browsers at once, in which a single lapse in judgement can shatter privacy defenses.

Abstract: We present the first large-scale studies of three advanced web tracking mechanisms: canvas fingerprinting, evercookies and use of "cookie syncing" in conjunction with evercookies. Canvas fingerprinting, a recently developed form of browser fingerprinting, has not previously been reported in the wild; our results show that over 5% of the top 100,000 websites employ it. We then present the first automated study of evercookies and respawning and the discovery of a new evercookie vector, IndexedDB. Turning to cookie syncing, we present novel techniques for detecting and analysing ID flows, and we quantify the amplification of privacy-intrusive tracking practices due to cookie syncing. Our evaluation of the defensive techniques used by privacy-aware users finds that there exist subtle pitfalls, such as failing to clear state on multiple browsers at once, in which a single lapse in judgement can shatter privacy defenses. This suggests that even sophisticated users face great difficulties in evading tracking techniques.

542 citations

Journal ArticleDOI
TL;DR: It is found that many techniques from dependability evaluation can be applied in the security domain, but that significant challenges remain, largely due to fundamental differences between the accidental nature of the faults commonly assumed in dependability evaluation and the intentional, human nature of cyber attacks.
Abstract: The development of techniques for quantitative, model-based evaluation of computer system dependability has a long and rich history. A wide array of model-based evaluation techniques is now available, ranging from combinatorial methods, which are useful for quick, rough-cut analyses, to state-based methods, such as Markov reward models, and detailed, discrete-event simulation. The use of quantitative techniques for security evaluation is much less common, and has typically taken the form of formal analysis of small parts of an overall design, or experimental red team-based approaches. Alone, neither of these approaches is fully satisfactory, and we argue that there is much to be gained through the development of a sound model-based methodology for quantifying the security one can expect from a particular design. In this work, we survey existing model-based techniques for evaluating system dependability, and summarize how they are now being extended to evaluate system security. We find that many techniques from dependability evaluation can be applied in the security domain, but that significant challenges remain, largely due to fundamental differences between the accidental nature of the faults commonly assumed in dependability evaluation, and the intentional, human nature of cyber attacks.

537 citations

Proceedings ArticleDOI
18 Nov 2002
TL;DR: A formal approach for finding bugs in security-relevant software and verifying their absence is described; experience suggests that this approach will be useful in finding a wide range of security vulnerabilities in large programs efficiently.
Abstract: We describe a formal approach for finding bugs in security-relevant software and verifying their absence. The idea is as follows: we identify rules of safe programming practice, encode them as safety properties, and verify whether these properties are obeyed. Because manual verification is too expensive, we have built a program analysis tool to automate this process. Our program analysis models the program to be verified as a pushdown automaton, represents the security property as a finite state automaton, and uses model checking techniques to identify whether any state violating the desired security goal is reachable in the program. The major advantages of this approach are that it is sound in verifying the absence of certain classes of vulnerabilities, that it is fully interprocedural, and that it is efficient and scalable. Experience suggests that this approach will be useful in finding a wide range of security vulnerabilities in large programs efficiently.
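The core construction can be sketched in simplified form. The tool models programs as pushdown automata; the sketch below flattens the program into a plain finite automaton over security-relevant events (the rule names are illustrative). A safety rule such as "after chroot, chdir must occur before any open" becomes a property automaton with an error state, and a breadth-first search over the product asks whether the error state is reachable.

```python
from collections import deque

# Program control flow as edges labelled with security-relevant events.
# This one contains the classic bug: open before chdir after a chroot.
program = {
    0: [("chroot", 1)],
    1: [("open", 2)],
    2: [],
}

# Property automaton for "chroot must be followed by chdir before open";
# the state "err" marks a violation.
def prop_step(state, event):
    if state == "ok" and event == "chroot":
        return "jailed"
    if state == "jailed" and event == "chdir":
        return "ok"
    if state == "jailed" and event == "open":
        return "err"
    return state

def violation_reachable(program, start):
    # BFS over the product of program states and property states
    seen, work = set(), deque([(start, "ok")])
    while work:
        node, pstate = work.popleft()
        if pstate == "err":
            return True
        if (node, pstate) in seen:
            continue
        seen.add((node, pstate))
        for event, succ in program[node]:
            work.append((succ, prop_step(pstate, event)))
    return False

assert violation_reachable(program, 0) is True
```

Soundness in this style comes from exploring every program path in the product, which is why the approach verifies the absence of the encoded vulnerability classes rather than merely sampling executions; the pushdown model in the paper adds precise matching of calls and returns on top of this picture.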

436 citations

Book ChapterDOI
08 Jul 2003
TL;DR: This method, based on Farkas’ Lemma, synthesizes linear invariants by extracting, from the program, non-linear constraints on the coefficients of a target invariant; these constraints guarantee that the linear invariant is inductive.
Abstract: We present a new method for the generation of linear invariants which reduces the problem to a non-linear constraint solving problem. Our method, based on Farkas’ Lemma, synthesizes linear invariants by extracting non-linear constraints on the coefficients of a target invariant from a program. These constraints guarantee that the linear invariant is inductive. We then apply existing techniques, including specialized quantifier elimination methods over the reals, to solve these non-linear constraints. Our method has the advantage of being complete for inductive invariants. To our knowledge, this is the first sound and complete technique for generating inductive invariants of this form. We illustrate the practicality of our method on several examples, including cases in which traditional methods based on abstract interpretation with widening fail to generate sufficiently strong invariants.

352 citations

Journal ArticleDOI
TL;DR: Self-composition enables the use of standard techniques for information flow policy verification, such as program logics and model checking, that are suitable for Proof Carrying Code infrastructures. Its applicability is illustrated in several settings, covering security policies such as non-interference and controlled forms of declassification, and programming languages including an imperative language with parallel composition.
Abstract: Information flow policies are confidentiality policies that control information leakage through program execution. A common way to enforce secure information flow is through information flow type systems. Although type systems are compositional and usually enjoy decidable type checking or inference, their extensibility is very poor: type systems need to be redefined and proved sound for each new variation of security policy and programming language for which secure information flow verification is desired. In contrast, program logics offer a general mechanism for enforcing a variety of safety policies, and for this reason are favoured in Proof Carrying Code, which is a promising security architecture for mobile code. However, the encoding of information flow policies in program logics is not straightforward because they refer to a relation between two program executions. The purpose of this paper is to investigate logical formulations of secure information flow based on the idea of self-composition, which reduces the problem of secure information flow of a program P to a safety property for a program derived from P by composing P with a renaming of itself. Self-composition enables the use of standard techniques for information flow policy verification, such as program logics and model checking, that are suitable in Proof Carrying Code infrastructures. We illustrate the applicability of self-composition in several settings, including different security policies such as non-interference and controlled forms of declassification, and programming languages including an imperative language with parallel composition, a non-deterministic language and, finally, a language with shared mutable data structures.
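Self-composition can be shown in miniature (illustrative only; the paper develops this for full program logics and model checking). To check that a program P leaks nothing from its high input h into its low output, compose P with a renamed copy of itself, share the low input, vary the high inputs independently, and check the ordinary safety property "the two low outputs are equal".

```python
# Self-composition in miniature: non-interference of P, a 2-run
# hyperproperty, becomes a 1-run safety property of the composed program.

def P(h, l):
    # candidate program: its low output must not depend on the secret h
    return l + 1 if h > 0 else l + 1

def self_composed(h1, h2, l):
    # P ; P'  with the low input shared and the high inputs independent
    return P(h1, l), P(h2, l)

# safety check over a small grid of inputs (a verifier using a program
# logic would prove the equality for ALL inputs, not just these)
for h1 in range(-2, 3):
    for h2 in range(-2, 3):
        for l in range(3):
            out1, out2 = self_composed(h1, h2, l)
            assert out1 == out2   # non-interference holds for this P
```

The reduction is what unlocks off-the-shelf tooling: once the property speaks about a single execution of the composed program, any standard safety verifier, including those already present in a Proof Carrying Code pipeline, can discharge it.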

325 citations