Proceedings ArticleDOI

R2Fix: Automatically Generating Bug Fixes from Bug Reports

18 Mar 2013, pp. 282-291
TL;DR: R2Fix combines past fix patterns, machine learning techniques, and semantic patch generation techniques to fix bugs automatically and could have shortened and saved up to an average of 63 days of bug diagnosis and patch generation time.
Abstract: Many bugs, even those that are known and documented in bug reports, remain in mature software for a long time due to a lack of development resources to fix them. We propose a general approach, R2Fix, to automatically generate bug-fixing patches from free-form bug reports. R2Fix combines past fix patterns, machine learning techniques, and semantic patch generation techniques to fix bugs automatically. We evaluate R2Fix on three projects, i.e., the Linux kernel, Mozilla, and Apache, for three important types of bugs: buffer overflows, null pointer bugs, and memory leaks. R2Fix generates 57 patches correctly, 5 of which are new patches for bugs that have not been fixed by developers yet. We reported all 5 new patches to the developers; 4 have already been accepted and committed to the code repositories. The 57 correct patches generated by R2Fix could have shortened and saved up to an average of 63 days of bug diagnosis and patch generation time.

Summary (1 min read)

  • The numbers are the total number of fix patterns for each bug type.
  • AVG denotes that the number in the cell is the average.
  • To extract the line number for memory leak bugs, R2Fix takes the number after “:” or “at line” in the bug report (a small sketch of this extraction follows this list).
  • The developers first need to understand this bug report by reading the relevant code together with the report: the buffer state contains only 4 bytes, but 5 bytes were written to it, namely the string “off” followed by one space character and the terminating ‘\0’ that marks the end of the string.
  • Developers often need to fix more bugs than their time and resources allow [6].
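
As an illustration of the extraction step noted above, here is a minimal C sketch. It is hypothetical (the thesis does not publish R2Fix's extractor code), so the helper name and the sample report strings are made up; it simply pulls the first number that appears after ":" or "at line" in a report sentence.

    /* Hypothetical helper, for illustration only; not R2Fix's actual code. */
    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static long extract_line_number(const char *report)
    {
        const char *p = strstr(report, "at line");   /* prefer the "at line" marker */
        if (p)
            p += strlen("at line");
        else
            p = strchr(report, ':');                 /* otherwise fall back to ":" */

        if (!p)
            return -1;                               /* no marker in this report */

        while (*p && !isdigit((unsigned char)*p))
            p++;                                     /* skip ahead to the first digit */

        return *p ? strtol(p, NULL, 10) : -1;
    }

    int main(void)
    {
        /* Made-up report fragments in the two formats mentioned above. */
        printf("%ld\n", extract_line_number("possible memory leak at line 2325"));
        printf("%ld\n", extract_line_number("drivers/net/foo.c:184: memory leak"));
        return 0;
    }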

R2Fix: Automatically Generating Bug
Fixes from Bug Reports
by
Chen Liu
A thesis
presented to the University of Waterloo
in fulfillment of the
thesis requirement for the degree of
Master of Applied Science
in
Electrical and Computer Engineering
Waterloo, Ontario, Canada, 2012
© Chen Liu 2012

I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including
any required final revisions, as accepted by my examiners.
I understand that my thesis may be made electronically available to the public.

Abstract
Many bugs, even those that are known and documented in bug reports, remain in mature
software for a long time due to a lack of development resources to fix them. We propose
a general approach, R2Fix, to automatically generate bug-fixing patches from free-form bug
reports. R2Fix combines past fix patterns, machine learning techniques, and semantic patch
generation techniques to fix bugs automatically. We evaluate R2Fix on three large and popular
software projects, i.e., the Linux kernel, Mozilla, and Apache, for three important types of bugs:
buffer overflows, null pointer bugs, and memory leaks. R2Fix generates 60 patches correctly,
5 of which are new patches for bugs that have not been fixed by developers yet. We reported
all 5 new patches to the developers; 4 have already been accepted and committed to the code
repositories. The 60 correct patches generated by R2Fix could have shortened and saved an
average of 68 days of bug diagnosis and patch generation time.

Acknowledgements
I would like to take the opportunity to express my deepest gratitude to my supervisor Prof.
Lin Tan. During my studies, she supported me in every aspect. I would like to thank her for her
enthusiasm and unwavering support, and for her guidance and the numerous useful discussions, feedback,
and encouragement. The things I learned from her will be extremely beneficial for my future
development.
I am thankful to the readers of the thesis, Prof. Patrick Lam and Prof. Mahesh V. Tripunitara, for
spending their valuable time reviewing the thesis and giving valuable comments.
Thanks to our research group members, especially Tian and Jinqiu. We have been in the
group together for more than a year. I enjoyed discussing topics in scientific research with them,
and I thank them for their inspiration and good ideas.
Lastly, and most importantly, I would like to acknowledge my family. My dear mother, the
first teacher and the role model in my life, gives me confidence to explore new things, especially
in a different country far away from my homeland. Thanks to her endless support, sacrifice and
patience. My dear father comes next. He taught me how to develop interests in a scientific area.
He taught me how to overcome difficulties in study and how to solve problems in daily life. To
them I dedicate the thesis.

Table of Contents
List of Tables
List of Figures
1 Introduction
  1.1 Ideal Goal vs. Realistic Goal
  1.2 Challenges
  1.3 Contributions
2 A Study of Fix Patterns
  2.1 Data Collection
  2.2 Fix Pattern Study Results
3 R2Fix Overview
  3.1 R2Fix Architecture
    3.1.1 Bug Classifiers
    3.1.2 Pattern Parameter Extractor
    3.1.3 Patch Generator
  3.2 Bug Classifiers
    3.2.1 Keyword Search versus Classification
    3.2.2 Bug Report Parsing and Classification

Citations
Journal ArticleDOI
TL;DR: The results indicate that software engineering work practices are chosen opportunistically, adapted and configured to provide value under the constraints imposed by the startup context.
Abstract: Context: Software startups are newly created companies with no operating history and fast in producing cutting-edge technologies. These companies develop software under highly uncertain conditions, tackling fast-growing markets under severe lack of resources. Therefore, software startups present a unique combination of characteristics which pose several challenges to software development activities. Objective: This study aims to structure and analyze the literature on software development in startup companies, determining thereby the potential for technology transfer and identifying software development work practices reported by practitioners and researchers. Method: We conducted a systematic mapping study, developing a classification schema, ranking the selected primary studies according to their rigor and relevance, and analyzing reported software development work practices in startups. Results: A total of 43 primary studies were identified and mapped, synthesizing the available evidence on software development in startups. Only 16 studies are entirely dedicated to software development in startups, of which 10 result in a weak contribution (advice and implications (6); lesson learned (3); tool (1)). Nineteen studies focus on managerial and organizational factors. Moreover, only 9 studies exhibit high scientific rigor and relevance. From the reviewed primary studies, 213 software engineering work practices were extracted, categorized and analyzed. Conclusion: This mapping study provides the first systematic exploration of the state of the art on software startup research. The existing body of knowledge is limited to a few high quality studies. Furthermore, the results indicate that software engineering work practices are chosen opportunistically, adapted and configured to provide value under the constraints imposed by the startup context.

359 citations

01 Jan 2013
TL;DR: This work applied boolean algebras to develop a mathematical model describing the exploits of the NVD data source when using the classification based on the concept of measurement, and proved that the proposed quasimeasure is a measure from the point of view of measure theory.
Abstract: This work is a sequel of the studies in the analysis of vulnerabilities in computer systems. It applied boolean algebras to develop a mathematical model describing the exploits of the NVD data source when using the classification based on the concept of measurement. A quasimeasure has been offered for the boolean algebra, and it is proved to be a measure from the point of view of measure theory. It is also shown that the algebraic structure is an algebra of events.

264 citations

Proceedings ArticleDOI
27 May 2018
TL;DR: This paper surveys a new class of approaches, namely program repair techniques, whose key idea is to automatically repair software systems by producing an actual fix that can be validated by the testers before it is finally accepted, or that is adapted to properly fit the system.
Abstract: Despite their growing complexity and increasing size, modern software applications must satisfy strict release requirements that impose short bug fixing and maintenance cycles, putting significant pressure on developers who are responsible for timely producing high-quality software. To reduce developers' workload, repairing and healing techniques have been extensively investigated as solutions for efficiently repairing and maintaining software in the last few years. In particular, repairing solutions have been able to automatically produce useful fixes for several classes of bugs that might be present in software programs. A range of algorithms, techniques, and heuristics have been integrated, experimented, and studied, producing a heterogeneous and articulated research framework where automatic repair techniques are proliferating. This paper organizes the knowledge in the area by surveying a body of 108 papers about automatic software repair techniques, illustrating the algorithms and the approaches, comparing them on representative examples, and discussing the open challenges and the empirical evidence reported so far.

256 citations


Cites background or methods from "R2Fix: Automatically Generating Bug..."

  • ...R2Fix [120] is a repair technique that generates fixes starting from bug reports filed by users, exploiting a range of pre-defined fix templates associated with a number of real-world bug reports....

    [...]

  • ...To mitigate this issue, some approaches investigated the definition of repair operators inspired from actual fixes written by developers [22], [119], [120], others considered the manual inspection of the fixes as the most useful validation method [55]....

    [...]

  • ...PHPRepair [138] and R2Fix [120] are two exceptions since they use GUI test cases and test cases similar to acceptance test cases derived from user-supplied bug reports, respectively....

    [...]

  • ...This is the case of History-driven repair [118], PAR [22], Relifix [119], and R2Fix [120]....

    [...]

Proceedings ArticleDOI
12 Jul 2018
TL;DR: This paper proposes a novel automatic program repair approach that utilizes both existing patches and similar code and obtains a concrete search space by differencing with similar code snippets and searches within the intersection of the two search spaces.
Abstract: Automated program repair (APR) has great potential to reduce bug-fixing effort and many approaches have been proposed in recent years. APRs are often treated as a search problem where the search space consists of all the possible patches and the goal is to identify the correct patch in the space. Many techniques take a data-driven approach and analyze data sources such as existing patches and similar source code to help identify the correct patch. However, while existing patches and similar code provide complementary information, existing techniques analyze only a single source and cannot be easily extended to analyze both. In this paper, we propose a novel automatic program repair approach that utilizes both existing patches and similar code. Our approach mines an abstract search space from existing patches and obtains a concrete search space by differencing with similar code snippets. Then we search within the intersection of the two search spaces. We have implemented our approach as a tool called SimFix, and evaluated it on the Defects4J benchmark. Our tool successfully fixed 34 bugs. To our best knowledge, this is the largest number of bugs fixed by a single technology on the Defects4J benchmark. Furthermore, as far as we know, 13 bugs fixed by our approach have never been fixed by the current approaches.

239 citations

References
Book
25 Oct 1999
TL;DR: This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining.
Abstract: Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors. Witten, Frank, and Hall include both tried-and-true techniques of today as well as methods at the leading edge of contemporary research. *Provides a thorough grounding in machine learning concepts as well as practical advice on applying the tools and techniques to your data mining projects *Offers concrete tips and techniques for performance improvement that work by transforming the input or output in machine learning methods *Includes downloadable Weka software toolkit, a collection of machine learning algorithms for data mining tasks-in an updated, interactive interface. Algorithms in toolkit cover: data pre-processing, classification, regression, clustering, association rules, visualization

20,196 citations

Book
28 May 1999
TL;DR: This foundational text is the first comprehensive introduction to statistical natural language processing (NLP) to appear and provides broad but rigorous coverage of mathematical and linguistic foundations, as well as detailed discussion of statistical methods, allowing students and researchers to construct their own implementations.
Abstract: Statistical approaches to processing natural language text have become dominant in recent years. This foundational text is the first comprehensive introduction to statistical natural language processing (NLP) to appear. The book contains all the theory and algorithms needed for building NLP tools. It provides broad but rigorous coverage of mathematical and linguistic foundations, as well as detailed discussion of statistical methods, allowing students and researchers to construct their own implementations. The book covers collocation finding, word sense disambiguation, probabilistic parsing, information retrieval, and other applications.

9,295 citations


"R2Fix: Automatically Generating Bug..." refers methods in this paper

  • ...Following machine learning methods [33], classifiers are built using a small training set of manually labelled bug reports....

    [...]

Journal Article

9,185 citations


Additional excerpts

  • ...%) and precisions (86.0–90.0%)....

    [...]

Proceedings ArticleDOI
08 Dec 2008
TL;DR: A new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs, and significantly beat the coverage of the developers' own hand-written test suite is presented.
Abstract: We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage -- on average over 90% per tool (median: over 94%) -- and significantly beat the coverage of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100% coverage on 31 of them.We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. Finally, we used KLEE to crosscheck purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies.

2,896 citations

Journal ArticleDOI
TL;DR: The experiment reported here shows that programmers also routinely break programs into one kind of coherent piece which is not contiguous.
Abstract: Computer programmers break apart large programs into smaller coherent pieces. Each of these pieces: functions, subroutines, modules, or abstract datatypes, is usually a contiguous piece of program text. The experiment reported here shows that programmers also routinely break programs into one kind of coherent piece which is not contiguous. When debugging unfamiliar programs, programmers use program pieces called slices which are sets of statements related by their flow of data. The statements in a slice are not necessarily textually contiguous, but may be scattered through a program.

813 citations


"R2Fix: Automatically Generating Bug..." refers background in this paper

  • ...Mark Weiser studies programmers’ behavior and finds that programmers usually break apart large programs into slices for the ease of debugging [43]....

    [...]

Frequently Asked Questions (9)
Q1. What are the contributions mentioned in the paper "R2Fix: Automatically Generating Bug Fixes from Bug Reports"?

The authors propose a general approach, R2Fix, to automatically generate bug-fixing patches from free-form bug reports. The authors evaluate R2Fix on three large and popular software projects, i.e., the Linux kernel, Mozilla, and Apache, for three important types of bugs: buffer overflows, null pointer bugs, and memory leaks. The authors reported all 5 new patches to the developers; 4 have already been accepted and committed to the code repositories.

In the future, the authors plan to generate patches for new types of bug reports, and to extend R2Fix to take the output of existing bug detection tools as input to improve the effectiveness of patch generation. R2Fix does not use the “Comment” fields of the bug reports, because the authors want to apply R2Fix as soon as a bug is reported to maximize the time and effort that R2Fix can save for developers in fixing bugs.

The patch deletes the line that writes 5 bytes to buffer state (denoted by - strcpy(state, "off ");), and adds a new line to write only 4 bytes to state (+ strcpy(state, "off");), which fixes the overflow bug. 

The developers first need to understand this bug report by reading the relevant code together with the report: the buffer state contains only 4 bytes, but 5 bytes were written to it, namely the string “off” followed by one space character and the terminating ‘\0’ that marks the end of the string.
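
The overflow and its fix can be pictured with a minimal C sketch; the 4-byte buffer and the two strcpy calls mirror the description and the patch quoted above, while the surrounding function is only illustrative scaffolding, not the actual driver code.

    #include <string.h>

    void example(void)
    {
        char state[4];          /* the buffer holds at most 4 bytes */

        /* Buggy line from the report: "off " is 4 characters plus the
         * terminating '\0', i.e. 5 bytes, one more than the buffer can hold. */
        strcpy(state, "off ");  /* overflows state by one byte */

        /* Patched line generated by R2Fix: "off" is 3 characters plus '\0',
         * i.e. 4 bytes, which fits the buffer exactly. */
        strcpy(state, "off");
    }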


Developers’ bug-fixing process is primarily manual; therefore the time required for producing a fix and its accuracy depend on the skill and experience of individuals. 

Developers often spend days, weeks, or even months diagnosing the root cause of a bug by reading the relevant source code, using a debugger to observe and modify the program execution on different inputs, etc. 

After a developer determines the root cause, typically the developer needs to figure out how to modify the buggy code to fix the bug, check out the buggy version of the software, apply the fix, and generate the patch. 
