Author

Emmanuele Zambon

Bio: Emmanuele Zambon is an academic researcher. The author has contributed to research in topics: Information technology & Information technology management. The author has an h-index of 1 and has co-authored 1 publication receiving 65 citations.

Papers
Dissertation
20 Jan 2011
TL;DR: A graph-based framework for modelling the availability dependencies of the components of an IT infrastructure is proposed and techniques based on this framework are developed to support availability planning.
Abstract: The availability of an organisation’s IT infrastructure is of vital importance for supporting business activities. IT outages are a source of competitive liability, chipping away at a company’s financial performance and reputation. To achieve the maximum possible IT availability within the available budget, organisations need to carry out a set of analysis activities to prioritise efforts and take decisions based on business needs. This set of analysis activities is called IT availability planning. Most (large) organisations address IT availability planning from one or more of three main angles: information risk management, business continuity and service level management. Information risk management consists of identifying, analysing, evaluating and mitigating the risks that can affect the information processed by an organisation and the information-processing (IT) systems. Business continuity consists of creating a logistic plan, called a business continuity plan, which contains the procedures and all the information needed to recover an organisation’s critical processes after a major disruption. Service level management mainly consists of organising, documenting and ensuring a certain quality level (e.g. the availability level) for the services offered by IT systems to the business units of an organisation. Several standard documents provide guidelines for setting up risk management, business continuity and service level management processes. However, to be as generally applicable as possible, these standards do not include implementation details. Consequently, to do IT availability planning each organisation needs to develop the concrete techniques that suit its needs. To be of practical use, these techniques must be accurate enough to deal with the increasing complexity of IT infrastructures, yet remain feasible within the budget available to organisations. As we argue in this dissertation, the basic approaches currently adopted by organisations are feasible but often lack accuracy. In this thesis we propose a graph-based framework for modelling the availability dependencies of the components of an IT infrastructure, and we develop techniques based on this framework to support availability planning.
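To make the dependency idea concrete, here is a minimal sketch of such a graph-based availability model, assuming a simple "A depends on B" edge relation and fixed-point failure propagation; the class, method and node names are illustrative assumptions, not the thesis's actual framework.

```python
# Minimal sketch of an availability-dependency graph (illustrative only):
# an edge A -> B means "A depends on B". A node is unavailable if it has
# failed directly or if anything it depends on is unavailable.

from collections import defaultdict

class DependencyGraph:
    def __init__(self):
        self.deps = defaultdict(set)  # node -> set of nodes it depends on

    def add_dependency(self, node, depends_on):
        self.deps[node].add(depends_on)

    def unavailable(self, failed):
        """Return every node made unavailable by the failed components."""
        down = set(failed)
        changed = True
        while changed:  # propagate failures until a fixed point is reached
            changed = False
            for node, requirements in self.deps.items():
                if node not in down and requirements & down:
                    down.add(node)
                    changed = True
        return down

g = DependencyGraph()
g.add_dependency("web-shop", "app-server")
g.add_dependency("app-server", "database")
g.add_dependency("app-server", "network")
print(g.unavailable({"database"}))  # {'database', 'app-server', 'web-shop'}
```

Such a model supports what-if analysis: re-running the propagation for each candidate failure set shows which business activities a given IT outage would take down.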

65 citations


Cited by
11 May 2011
TL;DR: This work develops a prototype Dreams engine to test the distributed protocol, using an actor library for the Scala language, and statically discovers regions of the coordination layer that can execute independently, thus achieving a truly decoupled execution of connectors.
Abstract: This work contributes to the field of coordination, in particular to Reo, by improving existing approaches to execute synchronisation models in three major ways. First, this work supports decoupled execution and lightweight reconfiguration. We developed a prototype Dreams engine to test our distributed protocol, using an actor library for the Scala language. Reconfiguration of a small part of the system is independent of the execution or behaviour of unrelated parts of the same system. Second, Dreams outperforms previous Reo engines by using constraint satisfaction techniques. In each round of the execution of the Dreams framework, descriptions of the behaviour of all building blocks are combined and a coordination pattern for the current round is chosen using constraint satisfaction techniques. This approach requires less time than previous attempts, which collect all patterns before selecting one. Third, our work improves scalability by identifying synchronous regions. We statically discover regions of the coordination layer that can execute independently, thus achieving a truly decoupled execution of connectors. Consequently, the constraint problem representing the behaviour at each round is smaller and more easily solved.
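The per-round constraint solving can be pictured with a deliberately simplified sketch: each connector primitive contributes a boolean constraint over "port fires" variables, and a round selects any assignment that satisfies all of them. The two channel constraints and the brute-force solver below are illustrative assumptions; the actual Dreams engine is distributed and solves these problems far more efficiently.

```python
# Illustrative sketch (not the actual Dreams engine): each Reo-style
# primitive is a constraint over boolean "port fires" variables, and one
# coordination round picks an assignment satisfying all constraints.

from itertools import product

ports = ["a", "b", "c"]

def sync_ab(v):      # a Sync channel a->b fires both ends or neither
    return v["a"] == v["b"]

def lossy_bc(v):     # a LossySync b->c may lose data: c firing needs b
    return not v["c"] or v["b"]

constraints = [sync_ab, lossy_bc]

def solve_round():
    """Brute-force the combined constraint problem for one round."""
    solutions = []
    for bits in product([False, True], repeat=len(ports)):
        v = dict(zip(ports, bits))
        if all(c(v) for c in constraints):
            solutions.append(v)
    return solutions

for solution in solve_round():
    print(solution)
```

Splitting a connector into synchronous regions, as the thesis does, amounts to partitioning the constraints into independent subproblems, each over fewer ports than the whole.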

98 citations

Journal Article
TL;DR: This thesis addresses the foundational aspects of formal methods for applications in security, in particular anonymity; it develops frameworks for the specification of anonymity properties and proposes algorithms for their verification.
Abstract: As we dive into the digital era, there is growing concern about the amount of personal digital information that is being gathered about us. Websites often track people's browsing behavior, health care insurers gather medical data, and many smartphones and navigation systems store or transmit information that makes it possible to track the physical location of their users at any time. Hence, anonymity, and privacy in general, are increasingly at stake. Anonymity protocols counter this concern by offering anonymous communication over the Internet. To ensure the correctness of such protocols, which are often extremely complex, a rigorous framework is needed in which anonymity properties can be expressed, analyzed, and ultimately verified. Formal methods provide a set of mathematical techniques that allow us to rigorously specify and verify anonymity properties. This thesis addresses the foundational aspects of formal methods for applications in security and in particular in anonymity. More concretely, we develop frameworks for the specification of anonymity properties and propose algorithms for their verification. Since in practice anonymity protocols always leak some information, we focus on quantitative properties which capture the amount of information leaked by a protocol. We start our research on anonymity from its very foundations, namely conditional probabilities: these are the key ingredient of most quantitative anonymity properties. In Chapter 2 we present cpCTL, the first temporal logic making it possible to specify conditional probabilities. In addition, we present an algorithm to verify cpCTL formulas in a model-checking fashion. This logic, together with the model-checker, allows us to specify and verify quantitative anonymity properties over complex systems where probabilistic and nondeterministic behavior may coexist. We then turn our attention to more practical grounds: the construction of algorithms to compute information leakage. More precisely, in Chapter 3 we present polynomial algorithms to compute the (information-theoretic) leakage of several kinds of fully probabilistic protocols (i.e. protocols without nondeterministic behavior). The techniques presented in this chapter are the first ones enabling the computation of (information-theoretic) leakage in interactive protocols. In Chapter 4 we attack a well-known problem in distributed anonymity protocols, namely full-information scheduling. To overcome this problem, we propose an alternative definition of schedulers together with several new definitions of anonymity (varying according to the attacker's power), and revise the famous definition of strong-anonymity from the literature. Furthermore, we provide a technique to verify that a distributed protocol satisfies some of the proposed definitions. In Chapter 5 we provide (counterexample-based) techniques to debug complex systems, allowing for the detection of flaws in security protocols. Finally, in Chapter 6 we briefly discuss extensions to the frameworks and techniques proposed in Chapters 3 and 4.
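For a fully probabilistic, non-interactive protocol, the information-theoretic leakage mentioned above reduces to the mutual information between the secret input and the observable output. The sketch below computes that quantity from a channel matrix; the matrix itself is made up for illustration, and the sketch does not reproduce the thesis's algorithms for interactive protocols.

```python
# Hedged sketch: information-theoretic leakage of a fully probabilistic
# protocol modelled as a channel matrix P(o | s) from secrets to
# observables. Leakage here is the mutual information
# I(S; O) = H(S) - H(S | O).

from math import log2

def mutual_information(prior, channel):
    """prior[s] = P(s); channel[s][o] = P(o | s)."""
    secrets = range(len(prior))
    observables = range(len(channel[0]))
    p_o = [sum(prior[s] * channel[s][o] for s in secrets)
           for o in observables]
    mi = 0.0
    for s in secrets:
        for o in observables:
            joint = prior[s] * channel[s][o]
            if joint > 0:
                mi += joint * log2(joint / (prior[s] * p_o[o]))
    return mi

# Two equally likely secrets; the protocol leaks partial information.
prior = [0.5, 0.5]
channel = [[0.75, 0.25],   # P(o | s=0)
           [0.25, 0.75]]   # P(o | s=1)
print(f"leakage: {mutual_information(prior, channel):.4f} bits")  # ~0.1887
```

A leakage of 0 bits would mean the observables reveal nothing about the secret; 1 bit would mean the two secrets are fully distinguishable.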

71 citations

Dissertation
01 Jan 2015
TL;DR: Memory trees are a middle ground, and therefore suitable to describe both the low-level and high-level aspects of the C memory as discussed by the authors; abstract values, which hide internal details, are used in the external interface of the memory model and throughout the operational semantics.
Abstract: Abstract values hide internal details of the memory such as permissions, padding and object representations. They are therefore used in the external interface of the memory model and throughout the operational semantics. Memory trees, abstract values and bits with permissions can be converted into each other. These conversions are used to define operations internal to the memory model. However, none of these conversions are bijective, because different information is materialized in these three data types:

                      Abstract values   Memory trees   Bits with permissions
Permissions                             X              X
Padding                                 always ⊥       X
Variants of union     X                 X
Mathematical values   X

This table indicates that abstract values and sequences of bits are complementary. Memory trees are a middle ground, and therefore suitable to describe both the low-level and high-level aspects of the C memory.
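A toy analogue of the three representations in ordinary Python may help fix the idea; the thesis's actual model is a Coq formalisation, and every name below is invented. The point is that each conversion forgets something, so none is bijective.

```python
# Toy illustration (not the thesis's Coq model): three views of one
# stored integer, with lossy conversions between them.

from dataclasses import dataclass

@dataclass
class MemoryTree:        # middle ground: object representation + permission
    byte_values: list
    permission: str

def abstract_to_tree(value, width=4, permission="Writable"):
    """Abstract value -> memory tree: a permission must be invented."""
    return MemoryTree(list(value.to_bytes(width, "little")), permission)

def tree_to_bits(tree):
    """Memory tree -> bits with permissions: tree structure is flattened."""
    return [(f"{b:08b}", tree.permission) for b in tree.byte_values]

def tree_to_abstract(tree):
    """Memory tree -> abstract value: permissions are forgotten."""
    return int.from_bytes(bytes(tree.byte_values), "little")

tree = abstract_to_tree(1025)
print(tree_to_bits(tree))      # bit strings, each tagged with a permission
print(tree_to_abstract(tree))  # 1025: the mathematical value survives
```

Going from raw bits back to an abstract value would additionally require reconstructing padding and union variants, which is exactly the information the table above shows is missing.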

69 citations

Dissertation
15 Sep 2011
TL;DR: This thesis introduces the EventReactor language as an implementation of the Event Composition Model, which offers a set of novel linguistic abstractions: events, event modules, reactors, reactor chains, an event composition language and an event constraint language.
Abstract: Runtime enforcement techniques are introduced in the literature to cope with the failures that occur while software is being executed in its target environment. Runtime enforcement techniques contain various concepts that are composed with each other so that the overall functionality of the techniques is achieved. By the term concept we mean a fundamental abstraction or definition that exists in most runtime enforcement techniques. Since the development of runtime enforcement techniques can be complex, runtime enforcement frameworks have been proposed to ease the development process. These frameworks offer specification languages to represent the concepts of interest. To facilitate a natural representation of the concepts, this thesis introduces a computation model termed the Event Composition Model, which offers a set of novel linguistic abstractions: events, event modules, reactors, reactor chains, an event composition language and an event constraint language. Events represent changes in the states of interest. Event modules are a means to group events; they have input-output interfaces and implementations. Reactors are the implementations of event modules. Reactor chains are groups of related reactors that process events in a sequence. The event composition language facilitates selecting the events of interest, and the event constraint language facilitates defining constraints among reactors or event modules. The thesis introduces the EventReactor language as an implementation of the Event Composition Model. The language is open-ended with respect to new sorts of events and reactor types, which helps to specify new sorts of concepts. It makes use of the Prolog language as its event composition language. Reactors and reactor chains are parameterizable, and are defined separately from event modules. This increases the reusability of event modules and their implementations. In the EventReactor language, the concepts of interest are represented independently from any programming language, and the compiler of EventReactor supports software developed in Java, C and .NET languages. For distributed software that makes use of Java RMI as the middleware, the EventReactor language supports distribution-transparent representations of the concepts. There are two basic ways to utilize the EventReactor language: a) as an underlying language for the specification languages of runtime enforcement frameworks; b) as an implementation language for runtime enforcement techniques.
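The core abstractions can be mimicked in a few lines of ordinary Python. The sketch below is only an analogy, since EventReactor is a separate language with its own compiler; every class, function and event name here is invented.

```python
# Analogy in plain Python (not EventReactor itself): an event module
# selects the events of interest and hands each one to a chain of
# reactors that process it in sequence.

class Event:
    def __init__(self, name, payload):
        self.name, self.payload = name, payload

def logger(event):                       # a reactor: observe and pass on
    print(f"observed {event.name}: {event.payload}")
    return event

def enforcer(event):                     # a reactor enforcing a policy
    if event.payload.get("user") == "untrusted":
        raise PermissionError("policy violation")
    return event

class EventModule:
    def __init__(self, selector, reactor_chain):
        self.selector = selector         # which events this module groups
        self.reactor_chain = reactor_chain

    def publish(self, event):
        if self.selector(event):
            for reactor in self.reactor_chain:   # reactors run in sequence
                event = reactor(event)

module = EventModule(lambda e: e.name == "file_open", [logger, enforcer])
module.publish(Event("file_open", {"user": "alice", "path": "/tmp/x"}))
```

In EventReactor itself, the selection step is expressed in its Prolog-based event composition language rather than a Python predicate, and the event constraint language restricts how reactors may be composed.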

69 citations

Dissertation
15 Dec 2011
TL;DR: This thesis investigates ambiguity detection with the aim of checking grammars for programming languages; it evaluates existing methods against a set of criteria for practical usability and presents improvements to ambiguity detection in the areas of accuracy, performance and report quality.
Abstract: Context-free grammars are the most suitable and most widely used method for describing the syntax of programming languages. They can be used to generate parsers, which transform a piece of source code into a tree-shaped representation of the code's syntactic structure. These parse trees can then be used for further processing or analysis of the source text. In this sense, grammars form the basis of many engineering and reverse engineering applications, like compilers, interpreters and tools for software analysis and transformation. Unfortunately, context-free grammars have the undesirable property that they can be ambiguous, which can seriously hamper their applicability. A grammar is ambiguous if at least one sentence in its language has more than one valid parse tree. Since the parse tree of a sentence is often used to infer its semantics, an ambiguous sentence can have multiple meanings. For programming languages this is almost always unintended. Ambiguity can therefore be seen as a grammar bug. A specific category of context-free grammars that is particularly sensitive to ambiguity are character-level grammars, which are used to generate scannerless parsers. Unlike traditional token-based grammars, character-level grammars include the full lexical definition of their language. This has the advantage that a language can be specified in a single formalism, and that no separate lexer or scanner phase is necessary in the parser. However, the absence of a scanner does require some additional lexical disambiguation. Character-level grammars can therefore be annotated with special disambiguation declarations to specify which parse trees to discard in case of ambiguity. Unfortunately, it is very hard to determine whether all ambiguities have been covered. The task of searching for ambiguities in a grammar is very complex and time consuming, and is therefore best automated. Since the invention of context-free grammars, several ambiguity detection methods have been developed to this end. However, the ambiguity problem for context-free grammars is undecidable in general, so the perfect detection method cannot exist. This implies a trade-off between accuracy and termination. Methods that apply exhaustive searching are able to correctly find all ambiguities, but they might never terminate. On the other hand, approximative search techniques do always produce an ambiguity report, but these might contain false positives or false negatives. Nevertheless, the fact that every method has flaws does not mean that ambiguity detection cannot be useful in practice. This thesis investigates ambiguity detection with the aim of checking grammars for programming languages. The challenge is to improve upon the state of the art by finding accurate enough methods that scale to realistic grammars. First we evaluate existing methods with a set of criteria for practical usability. Then we present various improvements to ambiguity detection in the areas of accuracy, performance and report quality. The main contributions of this thesis are two novel techniques. The first is an ambiguity detection method that applies both exhaustive and approximative searching, called AMBIDEXTER. The key ingredient of AMBIDEXTER is a grammar filtering technique that can remove harmless production rules from a grammar. A production rule is harmless if it does not contribute to any ambiguity in the grammar. Any harmless rules found can therefore safely be removed. This results in a smaller grammar that still contains the same ambiguities as the original one. However, it can now be searched with exhaustive techniques in less time. The grammar filtering technique is formally proven correct, and experimentally validated. A prototype implementation is applied to a series of programming language grammars, and the performance of exhaustive detection methods is measured before and after filtering. The results show that a small investment in filtering time can substantially reduce the run-time of exhaustive searching, sometimes by several orders of magnitude. After this evaluation on token-based grammars, the grammar filtering technique is extended for use with character-level grammars. The extensions deal with the increased complexity of these grammars, as well as their disambiguation declarations. This enables the detection of productions that are harmless due to disambiguation. The extensions are experimentally validated on another set of programming language grammars from practice, with results similar to before. Measurements show that, even though character-level grammars are more expensive to filter, the investment is still very worthwhile. Exhaustive search times were again reduced substantially. The second main contribution of this thesis is DR. AMBIGUITY, an expert system to help grammar developers understand and resolve found ambiguities. When applied to an ambiguous sentence, DR. AMBIGUITY analyzes the causes of the ambiguity and proposes a number of applicable solutions. A prototype implementation is presented and evaluated with a mature Java grammar. After removing disambiguation declarations from the grammar, we analyze sentences that have become ambiguous as a result of this removal. The results show that in all cases the removed filter is proposed by DR. AMBIGUITY as a possible cure for the ambiguity. Concluding, this thesis improves ambiguity detection with two novel methods. The first is the ambiguity detection method AMBIDEXTER, which applies grammar filtering to substantially speed up exhaustive searching. The second is the expert system DR. AMBIGUITY, which automatically analyzes found ambiguities and proposes applicable cures. The results obtained with both methods show that automatic ambiguity detection is now ready for realistic programming language grammars.
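The exhaustive side of this search can be illustrated with a tiny brute-force checker: enumerate all bounded leftmost derivations and flag any sentence produced by more than one, since leftmost derivations correspond one-to-one with parse trees. The toy grammar and depth bound are illustrative assumptions; AMBIDEXTER's grammar filtering and character-level support go far beyond this sketch.

```python
# Brute-force ambiguity check (illustrative; much simpler than AMBIDEXTER):
# count bounded leftmost derivations per sentence and report any sentence
# with more than one, i.e. with more than one parse tree.

from collections import defaultdict

# Classic ambiguous expression grammar: E -> E + E | a
grammar = {"E": [["E", "+", "E"], ["a"]]}

def sentences(symbols, depth):
    """Yield each terminal sentence once per leftmost derivation that
    uses at most `depth` production applications."""
    if depth < 0:
        return
    i = next((k for k, s in enumerate(symbols) if s in grammar), None)
    if i is None:                     # all terminals: a complete sentence
        yield " ".join(symbols)
        return
    for prod in grammar[symbols[i]]:  # expand the leftmost nonterminal
        yield from sentences(symbols[:i] + prod + symbols[i + 1:], depth - 1)

counts = defaultdict(int)
for sentence in sentences(["E"], depth=5):
    counts[sentence] += 1

ambiguous = {s: n for s, n in counts.items() if n > 1}
print(ambiguous)  # {'a + a + a': 2}: both (a+a)+a and a+(a+a)
```

Grammar filtering in the AMBIDEXTER style would shrink `grammar` before this enumeration runs, which is what makes exhaustive search affordable on realistic grammars.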

68 citations