Author

Daniel Dunbar

Other affiliations: University of Virginia
Bio: Daniel Dunbar is an academic researcher from Stanford University. The author has contributed to research in topics: Colors of noise & Sampling distribution. The author has an h-index of 3 and has co-authored 4 publications receiving 2,925 citations. Previous affiliations of Daniel Dunbar include the University of Virginia.

Papers
Proceedings ArticleDOI
08 Dec 2008
TL;DR: KLEE, a new symbolic execution tool, automatically generates tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs, significantly beating the coverage of the developers' own hand-written test suites.
Abstract: We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage -- on average over 90% per tool (median: over 94%) -- and significantly beat the coverage of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100% coverage on 31 of them. We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. Finally, we used KLEE to crosscheck purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies.
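As an illustration of how KLEE-style test generation is driven in practice, here is a minimal sketch in the style of KLEE's public tutorial. The function name get_sign and the exact command lines are assumptions that may vary across KLEE versions, but klee_make_symbolic is KLEE's documented entry point for marking memory as symbolic.

#include <klee/klee.h>

int get_sign(int x) {
    if (x == 0)
        return 0;    /* path 1: x == 0 */
    if (x < 0)
        return -1;   /* path 2: x != 0 && x < 0 */
    return 1;        /* path 3: x != 0 && x >= 0 */
}

int main(void) {
    int a;
    /* Mark 'a' as symbolic; KLEE solves for concrete values that
       drive execution down each of the three paths above. */
    klee_make_symbolic(&a, sizeof(a), "a");
    return get_sign(a);
}

/* Typical invocation (version-dependent):
 *   clang -I <klee-include-dir> -emit-llvm -c -g get_sign.c
 *   klee get_sign.bc
 * KLEE then emits one .ktest file per explored path. */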

2,896 citations

Journal ArticleDOI
01 Jul 2006
TL;DR: A new method for sampling by dart-throwing in O(N log N) time is presented, along with a novel and efficient variation for generating Poisson-disk distributions in O(N) time and space.
Abstract: Sampling distributions with blue noise characteristics are widely used in computer graphics. Although Poisson-disk distributions are known to have excellent blue noise characteristics, they are generally regarded as too computationally expensive to generate in real time. We present a new method for sampling by dart-throwing in O(N log N) time and introduce a novel and efficient variation for generating Poisson-disk distributions in O(N) time and space.
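For context, classic dart throwing can be accelerated with a background grid whose cell size is r/sqrt(2), so each cell holds at most one sample and every candidate checks only a constant number of neighboring cells. The sketch below is a minimal, assumed implementation of this baseline, not the paper's own data structure; all names and constants are illustrative.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define R      0.05     /* minimum distance between samples */
#define W      1.0      /* unit-square domain */
#define MAXTRY 100000   /* dart-throwing attempts */

typedef struct { double x, y; } Pt;

int main(void) {
    double cell = R / sqrt(2.0);       /* cell diagonal = R: <= 1 sample/cell */
    int gw = (int)ceil(W / cell);
    int *grid = malloc(gw * gw * sizeof *grid);  /* index into pts, or -1 */
    Pt *pts = malloc(gw * gw * sizeof *pts);
    int n = 0;
    for (int i = 0; i < gw * gw; i++) grid[i] = -1;

    for (int t = 0; t < MAXTRY; t++) {
        /* throw a dart uniformly into [0, W) x [0, W) */
        Pt c = { rand() / (RAND_MAX + 1.0) * W, rand() / (RAND_MAX + 1.0) * W };
        int gx = (int)(c.x / cell), gy = (int)(c.y / cell), ok = 1;
        /* only cells within 2 of the candidate's cell can hold a conflict */
        for (int dy = -2; dy <= 2 && ok; dy++)
            for (int dx = -2; dx <= 2 && ok; dx++) {
                int nx = gx + dx, ny = gy + dy;
                if (nx < 0 || ny < 0 || nx >= gw || ny >= gw) continue;
                int j = grid[ny * gw + nx];
                if (j >= 0 && hypot(pts[j].x - c.x, pts[j].y - c.y) < R)
                    ok = 0;            /* too close: reject the dart */
            }
        if (ok) { pts[n] = c; grid[gy * gw + gx] = n++; }
    }
    printf("accepted %d samples\n", n);
    free(grid); free(pts);
    return 0;
}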

243 citations

Proceedings ArticleDOI
09 Jul 2007
TL;DR: Software testing is well recognized as a crucial part of the modern software development process; however, manual testing is labor-intensive and often fails to produce impressive coverage results.
Abstract: Software testing is well-recognized as a crucial part of the modern software development process. However, manual testing is labor-intensive and often fails to produce impressive coverage results. Random testing is easily applied but gets poor coverage on complex code. Recent work has attacked these problems using symbolic execution to automatically generate high-coverage test inputs [3, 6, 4, 8, 5, 2]. At a high level, these tools use variations on the following idea. Instead of running code on manually or randomly constructed input, they run it on symbolic input initially allowed to be “anything.” They substitute program variables with symbolic values and replace concrete program operations with ones that manipulate symbolic values. When program execution branches based on a symbolic value, the system (conceptually) follows both branches at once, maintaining a set of constraints called the path condition, which must hold on execution of that path. When a path terminates or hits a bug, a test case can be generated by solving the current path condition to find concrete values. Assuming deterministic code, feeding this concrete input to an uninstrumented version of the checked code will cause it to follow the same path and hit the same bug. However, these tools (and all dynamic tools) assume you can run the code you want to check in the first place. In the easiest case, testing just runs an entire application. This requires no special work: just compile the program and execute it. However, the exponential number of code paths in a …
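To make the path-condition idea concrete, here is a toy example (assumed, not from the paper): symbolically executing check with an unconstrained input x forks at each branch, and solving each accumulated path condition yields one concrete test input per path.

/* Symbolic execution of check(x) with x = "anything" explores
 * three paths, each with its own path condition:
 *   path A: (x > 100) && (x % 2 == 0)  -> solver might return x = 102
 *   path B: (x > 100) && (x % 2 != 0)  -> e.g. x = 101
 *   path C: !(x > 100)                 -> e.g. x = 0
 */
int check(int x) {
    if (x > 100) {
        if (x % 2 == 0)
            return 2;   /* path A */
        return 1;       /* path B */
    }
    return 0;           /* path C */
}

int main(void) {
    /* replaying the three solver-produced inputs, one per path */
    return check(102) + check(101) + check(0);  /* 2 + 1 + 0 */
}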

73 citations

Proceedings ArticleDOI
25 Apr 2022
TL;DR: Initial results show successful extraction of structural information from requirement text, validated by comparing the results to human interpretations on small, public sample sets; the approach will be integrated into a novel requirement complexity assessment framework.
Abstract: The automatic extraction of structure from text can be difficult for machines. Yet, the elicitation of this information can provide many benefits and opportunities for various applications. Such benefits have been identified, among others, for the area of Requirements Engineering. By assessing the status quo and literature of Natural Language Processing for Requirements Engineering, a necessity for an automatic and universal approach to elicit structure from requirement and specification documents was identified. This paper outlines the first steps and results towards a modularized approach that splits the core algorithm from the text corpus as an input and the underlying rule/knowledge base. This separation of functions allows for individual modification of the included parts and eases or potentially removes restrictions and limitations, such as input rules or the necessity for human supervision. Furthermore, contextual information and links that are not explicit at the textual level can be considered via ontology inference. The initial results of the approach show the successful extraction of structural information from requirement text, which was validated by comparing the results to human interpretations for small and public sample sets. In addition, the contextual consideration and inference via ontologies are described conceptually. At the current stage, limitations still exist regarding scalability and handling of text ambiguities, but solutions for these caveats have been developed and are being tested. Overall, the approach and results presented are part of, and will be integrated into, a novel requirement complexity assessment framework.

4 citations

01 Jan 2006
TL;DR: A new data structure is presented that allows sampling by dart-throwing in O(N log N) time, and it is shown how a novel and efficient variation on this algorithm can be used to generate Poisson-disk distributions in O(N) time and space.
Abstract: Sampling distributions with blue noise characteristics are widely used in computer graphics. Although Poisson-disk distributions are known to have excellent blue noise characteristics, they are generally regarded as too computationally expensive to generate in real time. We present a new data structure that allows sampling by dart-throwing in O(N log N) time. We also show how a novel and efficient variation on this algorithm can be used to generate Poisson-disk distributions in O(N) time and space.

4 citations


Cited by
Proceedings ArticleDOI
10 Apr 2011
TL;DR: The design and implementation of CloneCloud is presented: a system that automatically transforms mobile applications to benefit from the cloud by enabling unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud.
Abstract: Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device.
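As a rough illustration of the kind of cost trade-off such a partitioner evaluates, the sketch below applies a simple offload rule: migrate when estimated remote execution plus round-trip state transfer beats local execution. This is an assumed toy model, not CloneCloud's actual optimizer (which combines static analysis with dynamic profiling); all names and numbers are hypothetical.

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    double local_ms;     /* profiled execution time on the device */
    double remote_ms;    /* profiled execution time on the clone */
    double state_bytes;  /* thread + reachable-heap state to migrate */
    double net_Bps;      /* current link throughput, bytes/second */
} method_profile;

/* Offload only when remote execution plus round-trip state
   transfer is cheaper than staying local. */
static bool should_offload(const method_profile *m) {
    double transfer_ms = 2.0 * m->state_bytes / m->net_Bps * 1000.0;
    return m->remote_ms + transfer_ms < m->local_ms;
}

int main(void) {
    method_profile heavy = { 800.0, 40.0, 64e3, 1e6 };  /* CPU-bound method */
    method_profile light = { 5.0, 0.3, 64e3, 1e6 };     /* cheap method */
    printf("heavy: %s\n", should_offload(&heavy) ? "offload" : "local");
    printf("light: %s\n", should_offload(&light) ? "offload" : "local");
    return 0;
}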

2,054 citations

01 Jan 2009
TL;DR: A table of contents for a verification proceedings volume, spanning parameterized-network verification, diagnostic and test generation, efficient model checking, and model-checking tools.
Abstract: (Proceedings table of contents) Abstracting WS1S Systems to Verify Parameterized Networks (Kai Baukus, Saddek Bensalem, Yassine Lakhnech and Karsten Stahl); FMona: A Tool for Expressing Validation Techniques over Infinite State Systems (J.-P. Bodeveix and M. Filali); Transitive Closures of Regular Relations for Verifying Infinite-State Systems (Bengt Jonsson and Marcus Nilsson); Diagnostic and Test Generation: Using Static Analysis to Improve Automatic Test Generation (Marius Bozga, Jean-Claude Fernandez and Lucian Ghirvu); Efficient Diagnostic Generation for Boolean Equation Systems (Radu Mateescu); Efficient Model-Checking: Compositional State Space Generation with Partial Order Reductions for Asynchronous Communicating Systems (Jean-Pierre Krimm and Laurent Mounier); Checking for CFFD-Preorder with Tester Processes (Juhana Helovuo and Antti Valmari); Fair Bisimulation (Thomas A. Henzinger and Sriram K. Rajamani); Integrating Low Level Symmetries into Reachability Analysis (Karsten Schmidt); Model-Checking Tools: Model Checking Support for the ASM High-Level Language (Giuseppe Del Castillo and Kirsten Winter).

1,687 citations

Book ChapterDOI
09 Apr 2008
TL;DR: Pex automatically produces a small test suite with high code coverage for a .NET program by performing a systematic program analysis using dynamic symbolic execution, similar to path-bounded model-checking, to determine test inputs for Parameterized Unit Tests.
Abstract: Pex automatically produces a small test suite with high code coverage for a .NET program. To this end, Pex performs a systematic program analysis (using dynamic symbolic execution, similar to path-bounded model-checking) to determine test inputs for Parameterized Unit Tests. Pex learns the program behavior by monitoring execution traces. Pex uses a constraint solver to produce new test inputs which exercise different program behavior. The result is an automatically generated small test suite which often achieves high code coverage. In one case study, we applied Pex to a core component of the .NET runtime which had already been extensively tested over several years. Pex found errors, including a serious issue.

900 citations

Proceedings ArticleDOI
16 May 2010
TL;DR: The algorithms for dynamic taint analysis and forward symbolic execution are described as extensions to the run-time semantics of a general language to highlight important implementation choices, common pitfalls, and considerations when using these techniques in a security context.
Abstract: Dynamic taint analysis and forward symbolic execution are quickly becoming staple techniques in security analyses. Example applications of dynamic taint analysis and forward symbolic execution include malware analysis, input filter generation, test case generation, and vulnerability discovery. Despite the widespread usage of these two techniques, there has been little effort to formally define the algorithms and summarize the critical issues that arise when these techniques are used in typical security contexts. The contributions of this paper are two-fold. First, we precisely describe the algorithms for dynamic taint analysis and forward symbolic execution as extensions to the run-time semantics of a general language. Second, we highlight important implementation choices, common pitfalls, and considerations when using these techniques in a security context.
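Here is a minimal sketch of the taint-propagation rule this line of work formalizes, under a single assumed policy: values from untrusted sources start tainted, and an operation's result is tainted if any operand is. The tval type and the source/sink helpers are hypothetical illustrations, not the paper's formal semantics.

#include <stdbool.h>
#include <stdio.h>

/* a value paired with its shadow taint bit */
typedef struct { int v; bool tainted; } tval;

/* values read from an untrusted source start tainted */
tval source(int v)   { return (tval){ v, true }; }
tval constant(int v) { return (tval){ v, false }; }

/* binary-op rule: the result is tainted if either operand is */
tval add(tval a, tval b) { return (tval){ a.v + b.v, a.tainted || b.tainted }; }

/* a "sink" policy check: flag tainted values reaching a sensitive op */
void sink(tval a) {
    if (a.tainted)
        fprintf(stderr, "warning: tainted value %d reached sink\n", a.v);
}

int main(void) {
    tval user = source(42);   /* e.g. bytes from the network */
    tval k    = constant(8);
    sink(add(user, k));       /* 42 + 8 is tainted: warning fires */
    sink(k);                  /* untainted: no warning */
    return 0;
}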

787 citations

Proceedings ArticleDOI
01 Jan 2016
TL;DR: Driller is presented, a hybrid vulnerability excavation tool that leverages fuzzing and selective concolic execution in a complementary manner to find deeper bugs, mitigating each technique's weaknesses: the path explosion inherent in concolic analysis and the incompleteness of fuzzing.
Abstract: Memory corruption vulnerabilities are an ever-present risk in software, which attackers can exploit to obtain unauthorized access to confidential information. As products with access to sensitive data are becoming more prevalent, the number of potentially exploitable systems is also increasing, resulting in a greater need for automated software vetting tools. DARPA recently funded a competition, with millions of dollars in prize money, to further research focusing on automated vulnerability finding and patching, showing the importance of research in this area. Current techniques for finding potential bugs include static, dynamic, and concolic analysis systems, each of which has its own advantages and disadvantages. A common limitation of systems designed to create inputs which trigger vulnerabilities is that they only find shallow bugs and struggle to exercise deeper paths in executables. We present Driller, a hybrid vulnerability excavation tool which leverages fuzzing and selective concolic execution in a complementary manner, to find deeper bugs. Inexpensive fuzzing is used to exercise compartments of an application, while concolic execution is used to generate inputs which satisfy the complex checks separating the compartments. By combining the strengths of the two techniques, we mitigate their weaknesses, avoiding the path explosion inherent in concolic analysis and the incompleteness of fuzzing. Driller uses selective concolic execution to explore only the paths deemed interesting by the fuzzer and to generate inputs for conditions that the fuzzer cannot satisfy. We evaluate Driller on 126 applications released in the qualifying event of the DARPA Cyber Grand Challenge and show its efficacy by identifying the same number of vulnerabilities, in the same time, as the top-scoring team of the qualifying event.
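Schematically, the fuzzing/concolic alternation Driller describes might look like the loop below. This is an assumed toy skeleton: fuzz_round, coverage_stalled, and concolic_solve are hypothetical stand-ins for the fuzzer and the concolic engine, not Driller's real interfaces.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins (hypothetical; not Driller's actual components). */
static int paths = 0, budget = 3;

static void fuzz_round(void)       { printf("fuzzing a compartment...\n"); }
static bool coverage_stalled(void) { return true; }  /* pretend we always stall */
static int  concolic_solve(void) {
    /* pretend the solver satisfies one frontier check per call, three times */
    if (budget-- > 0) { paths++; printf("concolic: new input -> %d paths\n", paths); return 1; }
    return 0;
}

int main(void) {
    /* alternate cheap fuzzing with targeted concolic execution */
    for (;;) {
        fuzz_round();
        if (coverage_stalled() && concolic_solve() == 0)
            break;   /* neither technique is making progress: stop */
    }
    return 0;
}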

778 citations