Author
Irene Finocchi
Other affiliations: University of Rome Tor Vergata
Bio: Irene Finocchi is an academic researcher from Sapienza University of Rome. Her research focuses on topics such as data structures and directed graphs. She has an h-index of 24 and has co-authored 90 publications receiving 1740 citations. Previous affiliations of Irene Finocchi include the University of Rome Tor Vergata.
Papers
TL;DR: A survey of symbolic execution, providing an overview of the main ideas, challenges, and solutions developed in the area and distilling them for a broad audience.
Abstract: Many security and software testing applications require checking whether certain properties of a program hold for any possible usage scenario. For instance, a tool for identifying software vulnerabilities may need to rule out the existence of any backdoor to bypass a program’s authentication. One approach would be to test the program using different, possibly random inputs. As the backdoor may only be hit for very specific program workloads, automated exploration of the space of possible inputs is of the essence. Symbolic execution provides an elegant solution to the problem, by systematically exploring many possible execution paths at the same time without necessarily requiring concrete inputs. Rather than taking on fully specified input values, the technique abstractly represents them as symbols, resorting to constraint solvers to construct actual instances that would cause property violations. Symbolic execution has been incubated in dozens of tools developed over the past four decades, leading to major practical breakthroughs in a number of prominent software reliability applications. The goal of this survey is to provide an overview of the main ideas, challenges, and solutions developed in the area, distilling them for a broad audience.
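To make the idea concrete, here is a minimal sketch of symbolic exploration using the Z3 solver's Python bindings (pip install z3-solver). The toy branch structure, the path names, and the way paths are enumerated by hand are illustrative assumptions, not the design of any particular tool covered by the survey.

```python
# A minimal illustration of symbolic execution: inputs are symbols, each
# program path contributes a path constraint, and a constraint solver
# produces concrete inputs that drive execution down that path.
from z3 import Int, Solver, And, Not, sat

x = Int('x')  # symbolic input instead of a concrete value

# Hand-enumerated path constraints for a toy program:
#   if x > 100: { if x < 110: BACKDOOR() else: ok() } else: untouched()
paths = {
    "backdoor":  And(x > 100, x < 110),
    "ok":        And(x > 100, Not(x < 110)),
    "untouched": Not(x > 100),
}

for name, pc in paths.items():
    s = Solver()
    s.add(pc)
    if s.check() == sat:
        # The solver constructs a concrete input witnessing this path.
        print(f"path {name!r} reachable with x = {s.model()[x]}")
    else:
        print(f"path {name!r} infeasible")
```

A real symbolic executor builds these constraints automatically while interpreting the program, forking the exploration at every branch on symbolic data.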
271 citations
Posted Content
TL;DR: The goal of this survey is to provide an overview of the main ideas, challenges, and solutions developed in symbolic execution, distilling them for a broad audience.
Abstract: Many security and software testing applications require checking whether certain properties of a program hold for any possible usage scenario. For instance, a tool for identifying software vulnerabilities may need to rule out the existence of any backdoor to bypass a program's authentication. One approach would be to test the program using different, possibly random inputs. As the backdoor may only be hit for very specific program workloads, automated exploration of the space of possible inputs is of the essence. Symbolic execution provides an elegant solution to the problem, by systematically exploring many possible execution paths at the same time without necessarily requiring concrete inputs. Rather than taking on fully specified input values, the technique abstractly represents them as symbols, resorting to constraint solvers to construct actual instances that would cause property violations. Symbolic execution has been incubated in dozens of tools developed over the last four decades, leading to major practical breakthroughs in a number of prominent software reliability applications. The goal of this survey is to provide an overview of the main ideas, challenges, and solutions developed in the area, distilling them for a broad audience.
The present survey has been accepted for publication at ACM Computing Surveys. If you are considering citing this survey, we would appreciate it if you could use the following BibTeX entry: this http URL
134 citations
TL;DR: A resilient dictionary is presented, implementing search, insert, and delete operations, and it is shown that any resilient comparison-based dictionary must take Ω(log n + δ) expected time per search.
Abstract: We address the problem of designing data structures in the presence of faults that may arbitrarily corrupt memory locations. More precisely, we assume that an adaptive adversary can arbitrarily overwrite the content of up to δ memory locations, that corrupted locations cannot be detected, and that only O(1) memory locations are safe. In this framework, we call a data structure resilient if it is able to operate correctly (at least) on the set of uncorrupted values. We present a resilient dictionary, implementing search, insert, and delete operations. Our dictionary has O(log n + δ) expected amortized time per operation, and O(n) space complexity, where n denotes the current number of keys in the dictionary. We also describe a deterministic resilient dictionary, with the same amortized cost per operation over a sequence of at least δ^ϵ operations, where ϵ > 0 is an arbitrary constant. Finally, we show that any resilient comparison-based dictionary must take Ω(log n + δ) expected time per search. Our results are achieved by means of simple, new techniques which might be of independent interest for the design of other resilient algorithms.
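For intuition, below is a minimal sketch of a standard building block in this faulty-memory line of work: a value is replicated 2δ+1 times, so even if an adversary overwrites up to δ copies, a majority vote still recovers it. The class name and structure are illustrative, not the paper's dictionary construction itself.

```python
# Resilient variable sketch: 2*delta + 1 copies in "vulnerable" memory.
# At most delta copies can be corrupted, so the majority is always correct.
from collections import Counter

class ResilientVariable:
    def __init__(self, value, delta):
        self.delta = delta
        self.copies = [value] * (2 * delta + 1)

    def write(self, value):
        for i in range(len(self.copies)):
            self.copies[i] = value

    def read(self):
        # Majority vote over all copies recovers the uncorrupted value.
        return Counter(self.copies).most_common(1)[0][0]

v = ResilientVariable(42, delta=2)
v.copies[0] = 999   # adversary corrupts up to delta locations
v.copies[3] = -1
assert v.read() == 42
```

The dictionary in the paper is far more involved: naive replication of every key would blow up space and time, so the challenge is getting resilience at only an additive O(δ) cost per operation.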
124 citations
11 Jun 2012
TL;DR: A profiling methodology and toolkit for the automatic discovery of workload-dependent performance bottlenecks, able to reveal asymptotic inefficiencies that other profilers may fail to detect and to provide useful characterizations of the workload and behavior of individual routines in the context of mainstream applications.
Abstract: In this paper we present a profiling methodology and toolkit for helping developers discover hidden asymptotic inefficiencies in the code. From one or more runs of a program, our profiler automatically measures how the performance of individual routines scales as a function of the input size, yielding clues to their growth rate. The output of the profiler is, for each executed routine of the program, a set of tuples that aggregate performance costs by input size. The collected profiles can be used to produce performance plots and derive trend functions by statistical curve fitting or bounding techniques. A key feature of our method is the ability to automatically measure the size of the input given to a generic code fragment: to this aim, we propose an effective metric for estimating the input size of a routine and show how to compute it efficiently. We discuss several case studies, showing that our approach can reveal asymptotic bottlenecks that other profilers may fail to detect and characterize the workload and behavior of individual routines in the context of real applications. To prove the feasibility of our techniques, we implemented a Valgrind tool called aprof and performed an extensive experimental evaluation on the SPEC CPU2006 benchmarks. Our experiments show that aprof delivers comparable performance to other prominent Valgrind tools, and can generate informative plots even from single runs on typical workloads for most algorithmically-critical routines.
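The sketch below illustrates the profiling loop described above: record an (input size, cost) tuple per call, aggregate by routine, and inspect how cost grows with size. The decorator, wall-clock time as the cost measure, and len() as the input-size metric are crude stand-ins; aprof measures cost at the instruction level and derives input size automatically from the memory cells a routine accesses, via the dedicated metric proposed in the paper.

```python
# Input-sensitive profiling sketch: per-call (size, cost) tuples per routine.
import time
from collections import defaultdict

profiles = defaultdict(list)  # routine name -> list of (size, cost) tuples

def profiled(size_of):
    def wrap(fn):
        def inner(arg):
            t0 = time.perf_counter()
            out = fn(arg)
            profiles[fn.__name__].append((size_of(arg), time.perf_counter() - t0))
            return out
        return inner
    return wrap

@profiled(size_of=len)
def quadratic_scan(xs):
    # deliberately O(n^2): the profile should reveal the quadratic trend
    return sum(1 for a in xs for b in xs if a == b)

for n in (100, 200, 400, 800):
    quadratic_scan(list(range(n)))

for name, tuples in profiles.items():
    for size, cost in tuples:
        print(f"{name}: n={size:4d} cost={cost:.6f}s")  # plot or curve-fit these
```

Doubling the input size roughly quadruples the cost in the printed tuples, which is exactly the kind of growth-rate clue the methodology is designed to surface.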
77 citations
TL;DR: This paper shows that the "streaming with sorting" model by Aggarwal et al. can yield interesting results even without using sorting at all: by just using intermediate temporary streams, it provides the first effective space-passes tradeoffs for natural graph problems.
Abstract: Data stream processing has recently received increasing attention as a computational paradigm for dealing with massive data sets. Surprisingly, no algorithm with both sublinear space and passes is known for natural graph problems in classical read-only streaming. Motivated by technological factors of modern storage systems, some authors have recently started to investigate the computational power of less restrictive models where writing streams is allowed. In this article, we show that the use of intermediate temporary streams is powerful enough to provide effective space-passes tradeoffs for natural graph problems. In particular, for any space restriction of s bits, we show that single-source shortest paths in directed graphs with small positive integer edge weights can be solved in O((n log^{3/2} n)/√s) passes. The result can be generalized to deal with multiple sources within the same bounds. This is the first known streaming algorithm for shortest paths in directed graphs. For undirected connectivity, we devise an O((n log n)/s)-pass algorithm. Both problems require Ω(n/s) passes under the restrictions we consider. We also show that the model where intermediate temporary streams are allowed can be strictly more powerful than classical streaming for some problems, while maintaining all of its hardness for others.
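Here is a toy sketch of the "write intermediate streams" idea for undirected connectivity: each pass reads the current edge stream, contracts the edges among a small in-memory set of vertices with union-find, and writes the relabeled leftover edges to a new stream for the next pass. Less memory (smaller k) means more passes. This is a simplification for intuition under assumed names and structure, not the paper's algorithm, which also handles directed shortest paths and proves matching lower bounds.

```python
# Multi-pass streaming connectivity with intermediate output streams.

def find(p, x):
    while p[x] != x:
        p[x] = p[p[x]]  # path halving
        x = p[x]
    return x

def components(n, edges, k):
    """Count connected components of an n-vertex graph given as an edge
    stream, keeping union-find state for at most k vertices in memory.
    Requires k >= 2 so every pass consumes at least its first edge."""
    assert k >= 2
    stream, merges = list(edges), 0
    while stream:
        p = {}      # in-memory union-find over at most k vertices
        out = []    # intermediate stream written for the next pass
        for u, v in stream:
            for x in (u, v):
                if x not in p and len(p) < k:
                    p[x] = x
            if u in p and v in p:
                ru, rv = find(p, u), find(p, v)
                if ru != rv:
                    p[ru] = rv
                    merges += 1
            else:
                out.append((u, v))
        # relabel leftover edges by in-memory representatives, drop loops
        stream = [(find(p, u) if u in p else u, find(p, v) if v in p else v)
                  for u, v in out]
        stream = [(u, v) for u, v in stream if u != v]
    return n - merges

edges = [(0, 1), (1, 2), (3, 4), (5, 6), (4, 5)]
print(components(7, edges, k=3))  # -> 2: components {0,1,2} and {3,4,5,6}
```

Each union permanently merges two super-vertices, so the component count is n minus the total number of merges once the edge stream is exhausted.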
74 citations
Cited by
Book
01 Jan 2006
TL;DR: The author discusses the history and present situation of operating systems, as well as some of the techniques used to design and implement these systems.
Abstract: Table of Contents
CHAPTER 1 INTRODUCTION: 1.1 What Is an Operating System? 1.2 History of Operating Systems. 1.3 Operating System Concepts. 1.4 System Calls. 1.5 Operating System Structure. 1.6 Outline of the Rest of This Book. 1.7 Summary.
CHAPTER 2 PROCESSES: 2.1 Introduction to Processes. 2.2 Interprocess Communication. 2.3 Classical IPC Problems. 2.4 Scheduling. 2.5 Overview of Processes in MINIX 3. 2.6 Implementation of Processes in MINIX 3. 2.7 The System Task in MINIX 3. 2.8 The Clock Task in MINIX 3. 2.9 Summary.
CHAPTER 3 INPUT/OUTPUT: 3.1 Principles of I/O Hardware. 3.2 Principles of I/O Software. 3.3 Deadlocks. 3.4 Overview of I/O in MINIX 3. 3.5 Block Devices in MINIX 3. 3.6 RAM Disks. 3.7 Disks. 3.8 Terminals. 3.9 Summary.
CHAPTER 4 MEMORY MANAGEMENT: 4.1 Basic Memory Management. 4.2 Swapping. 4.3 Virtual Memory. 4.4 Page Replacement Algorithms. 4.5 Design Issues for Paging Systems. 4.6 Segmentation. 4.7 Overview of the MINIX 3 Process Manager. 4.8 Implementation of the MINIX 3 Process Manager. 4.9 Summary.
CHAPTER 5 FILE SYSTEMS: 5.1 Files. 5.2 Directories. 5.3 File System Implementation. 5.4 Security. 5.5 Protection Mechanisms. 5.6 Overview of the MINIX 3 File System. 5.7 Implementation of the MINIX 3 File System. 5.8 Summary.
CHAPTER 6 READING LIST AND BIBLIOGRAPHY: 6.1 Suggestions for Further Reading. 6.2 Alphabetical Bibliography.
APPENDIX A: Installing MINIX 3. APPENDIX B: MINIX 3 Source Code Listing. APPENDIX C: Index to Files. INDEX.
572 citations
01 Oct 2011
TL;DR: Relevant features of Soot are described, its development process is summarized, and useful features for future program analysis frameworks are discussed.
Abstract: Soot is a successful framework for experimenting with compiler and software engineering techniques for Java programs. Researchers from around the world have implemented a wide range of research tools which build on Soot, and Soot has been widely used by students for both courses and thesis research. In this paper, we describe relevant features of Soot, summarize its development process, and discuss useful features for future program analysis frameworks.
334 citations
24 Aug 2008
TL;DR: This is the first paper to address the problem of local triangle counting with a focus on the efficiency issues arising in massive graphs; it proposes two approximation algorithms based on the idea of min-wise independent permutations.
Abstract: In this paper we study the problem of local triangle counting in large graphs. Namely, given a large graph G = (V, E) we want to estimate as accurately as possible the number of triangles incident to every node v ∈ V in the graph. The problem of computing the global number of triangles in a graph has been considered before, but to our knowledge this is the first paper that addresses the problem of local triangle counting with a focus on the efficiency issues arising in massive graphs. The distribution of the local number of triangles and the related local clustering coefficient can be used in many interesting applications. For example, we show that the measures we compute can help to detect the presence of spamming activity in large-scale Web graphs, as well as to provide useful features to assess content quality in social networks. For computing the local number of triangles we propose two approximation algorithms, which are based on the idea of min-wise independent permutations (Broder et al. 1998). Our algorithms operate in a semi-streaming fashion, using O(|V|) space in main memory and performing O(log |V|) sequential scans over the edges of the graph. The first algorithm we describe in this paper also uses O(|E|) space in external memory during computation, while the second algorithm uses only main memory. We present the theoretical analysis as well as experimental results in massive graphs demonstrating the practical efficiency of our approach.
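Below is a small sketch of the min-wise-hashing idea behind such estimates: T(v) = 1/2 Σ over neighbors u of |N(u) ∩ N(v)|, and each intersection size is recovered from the Jaccard similarity of the two neighbor sets, which min-wise hashing approximates. For clarity the adjacency lists are held in memory and the hash family is a toy XOR mask, whereas the paper's algorithms work in a semi-streaming fashion with truly min-wise independent permutations.

```python
# Local triangle estimates via minhash signatures of neighbor sets.
import random

def minhash_signature(items, hashers):
    return tuple(min(h(x) for x in items) for h in hashers)

def estimated_jaccard(sig_a, sig_b):
    # P[min-hashes agree] = |A & B| / |A | B| for min-wise independent hashes
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def local_triangles(adj, rounds=200, seed=0):
    rng = random.Random(seed)
    masks = [rng.getrandbits(64) for _ in range(rounds)]
    hashers = [lambda x, m=m: hash(x) ^ m for m in masks]  # toy hash family
    sig = {v: minhash_signature(nbrs, hashers) for v, nbrs in adj.items()}
    est = {}
    for v, nbrs in adj.items():
        total = 0.0
        for u in nbrs:
            j = estimated_jaccard(sig[u], sig[v])
            if j > 0:
                # |A & B| = J * (|A| + |B|) / (1 + J)
                total += j * (len(adj[u]) + len(nbrs)) / (1 + j)
        est[v] = total / 2  # each triangle is counted via two neighbors
    return est

# toy graph: a 4-clique {0,1,2,3} with a pendant node 4
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
print(local_triangles(adj))  # nodes 0..3 should be near 3; node 4 near 0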
317 citations
TL;DR: This survey provides an overview of the way machine learning has been used so far in the context of malware analysis in Windows environments, i.e., for the analysis of Portable Executables.
Abstract: Coping with malware is getting more and more challenging, given its relentless growth in complexity and volume. One of the most common approaches in the literature is using machine learning techniques, to automatically learn models and patterns behind such complexity, and to develop technologies to keep pace with malware evolution. This survey aims at providing an overview of the way machine learning has been used so far in the context of malware analysis in Windows environments, i.e., for the analysis of Portable Executables. We systematize surveyed papers according to their objectives (i.e., the expected output), what information about malware they specifically use (i.e., the features), and what machine learning techniques they employ (i.e., what algorithm is used to process the input and produce the output). We also outline a number of issues and challenges, including those concerning the datasets used, and identify the main current topical trends and how to possibly advance them. In particular, we introduce the novel concept of malware analysis economics, regarding the study of existing trade-offs among key metrics, such as analysis accuracy and economic costs.
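A minimal, hedged sketch of the kind of pipeline the survey systematizes: static features extracted from Portable Executables feed a supervised learner that outputs a benign/malicious verdict. The feature names and the tiny toy dataset below are purely illustrative assumptions; real systems extract hundreds of features (imports, section entropies, byte n-grams, ...) from large labeled corpora.

```python
# Toy malware-classification pipeline with scikit-learn.
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier

samples = [  # hypothetical per-PE feature dicts (the "features" axis)
    {"n_sections": 4, "max_section_entropy": 6.1, "n_imports": 120, "packed": 0},
    {"n_sections": 3, "max_section_entropy": 7.9, "n_imports": 4,   "packed": 1},
    {"n_sections": 5, "max_section_entropy": 6.4, "n_imports": 210, "packed": 0},
    {"n_sections": 2, "max_section_entropy": 7.8, "n_imports": 2,   "packed": 1},
]
labels = [0, 1, 0, 1]  # 0 = benign, 1 = malicious (the "objective" axis)

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(samples)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

unknown = {"n_sections": 2, "max_section_entropy": 7.7, "n_imports": 3, "packed": 1}
print(clf.predict(vec.transform([unknown])))  # the "algorithm" axis at work
```

The survey's three systematization axes map directly onto this skeleton: what is predicted (labels), what is measured (the feature dicts), and which learner processes them (here, an illustrative random forest).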
316 citations