Conference
International Symposium on Software Reliability Engineering
About: The International Symposium on Software Reliability Engineering is an academic conference. The conference publishes mainly in the areas of software quality and software reliability testing. Over its lifetime, the conference has published 1,783 papers, which have received 36,915 citations.
Topics: Software quality, Software reliability testing, Software system, Software, Software development
Papers published on a yearly basis
Papers
02 Nov 1997
TL;DR: The proposed hybrid technique combines modification-, minimization-, and prioritization-based selection, using a list of source code changes and the execution traces from test cases run on previous versions, to identify a representative subset of all test cases that may produce different output behavior on the new software version.
Abstract: The purpose of regression testing is to ensure that changes made to software, such as adding new features or modifying existing features, have not adversely affected features of the software that should not change. Regression testing is usually performed by running some, or all, of the test cases created to test modifications in previous versions of the software. Many techniques have been reported on how to select regression tests so that the number of test cases does not grow too large as the software evolves. Our proposed hybrid technique combines modification, minimization and prioritization-based selection using a list of source code changes and the execution traces from test cases run on previous versions. This technique seeks to identify a representative subset of all test cases that may result in different output behavior on the new software version. We report our experience with a tool called ATAC (Automatic Testing Analysis tool in C) which implements this technique.
424 citations
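The selection idea in the abstract above can be sketched in a few lines, assuming per-test execution traces are available as sets of covered line numbers. This is an illustrative sketch, not the paper's ATAC tool; all names and the greedy minimization strategy are assumptions.

```python
# Sketch of modification-based regression test selection with greedy
# minimization/prioritization. `traces` maps test name -> set of covered
# line numbers; `changed_lines` lists the lines modified in the new version.

def select_and_prioritize(traces, changed_lines):
    """Keep tests whose traces touch a changed line (selection), then
    greedily order them by how many not-yet-covered changed lines each
    covers (minimization + prioritization)."""
    changed = set(changed_lines)
    selected = {t: cov & changed for t, cov in traces.items() if cov & changed}
    ordered, remaining = [], set(changed)
    while remaining and selected:
        # Pick the test covering the most still-uncovered changed lines.
        best = max(selected, key=lambda t: len(selected[t] & remaining))
        if not selected[best] & remaining:
            break
        ordered.append(best)
        remaining -= selected[best]
        del selected[best]
    return ordered

traces = {
    "t1": {1, 2, 3, 10},
    "t2": {4, 5},
    "t3": {10, 11, 12},
    "t4": {2, 11},
}
print(select_and_prioritize(traces, [2, 10, 11]))  # → ['t1', 't3']
```

Here t2 is dropped (it touches no changed line), and two tests suffice to cover all three changed lines, so t4 is never scheduled.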
01 Oct 2016
TL;DR: The authors provide a detailed review and evaluation of six state-of-the-art log-based anomaly detection methods, three supervised and three unsupervised, and also release an open-source toolkit for ease of reuse.
Abstract: Anomaly detection plays an important role in the management of modern large-scale distributed systems. Logs, which record system runtime information, are widely used for anomaly detection. Traditionally, developers (or operators) often inspect the logs manually with keyword search and rule matching. The increasing scale and complexity of modern systems, however, make the volume of logs explode, which renders manual inspection infeasible. To reduce manual effort, many anomaly detection methods based on automated log analysis have been proposed. However, developers may still have no idea which anomaly detection method they should adopt, because there is a lack of a review and comparison among these methods. Moreover, even if developers decide to employ an anomaly detection method, re-implementation requires non-trivial effort. To address these problems, we provide a detailed review and evaluation of six state-of-the-art log-based anomaly detection methods, including three supervised methods and three unsupervised methods, and also release an open-source toolkit allowing ease of reuse. These methods have been evaluated on two publicly available production log datasets, with a total of 15,923,592 log messages and 365,298 anomaly instances. We believe that our work, with the evaluation results as well as the corresponding findings, can provide guidelines for adoption of these methods and provide references for future development.
378 citations
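A toy unsupervised detector conveys the flavor of the log-analysis methods surveyed above: group log events into sessions and flag sessions containing rarely seen event types. This is a deliberately simplified sketch, not one of the six methods the paper evaluates; the session data and threshold are invented for illustration.

```python
# Minimal unsupervised log anomaly detection: events seen in very few
# training sessions are treated as anomaly signals.
from collections import Counter

def fit(sessions):
    """Count, for each log-event template, how many sessions contain it."""
    freq = Counter()
    for events in sessions.values():
        freq.update(set(events))   # count each event once per session
    return freq

def detect(sessions, freq, min_support=2):
    """Flag sessions containing an event seen in fewer than `min_support`
    training sessions -- rare events often accompany failures."""
    return [sid for sid, events in sessions.items()
            if any(freq[e] < min_support for e in events)]

train = {
    "s1": ["open", "read", "close"],
    "s2": ["open", "read", "read", "close"],
    "s3": ["open", "write", "close"],
    "s4": ["open", "read", "ERROR_disk", "close"],
}
freq = fit(train)
print(detect(train, freq))  # → ['s3', 's4'] (each contains a once-seen event)
```

Real log-based methods first parse raw messages into event templates; that parsing step is skipped here by assuming events arrive pre-templated.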
24 Oct 1995
TL;DR: A tool which supports execution slicing and dicing based on test cases is described and an experiment that uses heuristic techniques in fault localization is reported.
Abstract: Finding a fault in a program is a complex process which involves understanding the program's purpose, structure, semantics, and the relevant characteristics of failure producing tests. We describe a tool which supports execution slicing and dicing based on test cases. We report the results of an experiment that uses heuristic techniques in fault localization.
305 citations
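The dicing idea above has a compact core: a "dice" subtracts the execution slices of passing tests from the slice of a failing test, narrowing where the fault can hide. A minimal sketch, with invented line numbers (this is not the paper's tool):

```python
# Execution dicing: statements executed by the failing test but by none
# of the passing tests are the most suspicious.

def dice(failing_slice, passing_slices):
    """Return the failing test's covered lines minus every line any
    passing test also covered, sorted for stable output."""
    suspicious = set(failing_slice)
    for sl in passing_slices:
        suspicious -= set(sl)
    return sorted(suspicious)

failing = [1, 2, 5, 7, 9]          # lines hit by the failing test
passing = [[1, 2, 3], [1, 5, 6]]   # lines hit by two passing tests
print(dice(failing, passing))      # → [7, 9]
```

In practice the dice can be empty (the fault lies in commonly executed code), which is why the paper pairs dicing with heuristic fault-localization techniques.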
04 Nov 1998
TL;DR: An SNMP-based distributed data collection tool gathers operating system resource usage and system activity data at regular intervals from networked UNIX workstations; a metric, "estimated time to exhaustion", is proposed and calculated using well-known slope estimation techniques.
Abstract: The phenomenon of software aging refers to the accumulation of errors during the execution of the software which eventually results in its crash/hang failure. A gradual performance degradation may also accompany software aging. Pro-active fault management techniques such as "software rejuvenation" (Y. Huang et al., 1995) may be used to counteract aging if it exists. We propose a methodology for detection and estimation of aging in the UNIX operating system. First, we present the design and implementation of an SNMP-based, distributed monitoring tool used to collect operating system resource usage and system activity data at regular intervals, from networked UNIX workstations. Statistical trend detection techniques are applied to this data to detect/validate the existence of aging. For quantifying the effect of aging in operating system resources, we propose a metric: "estimated time to exhaustion", which is calculated using well known slope estimation techniques. Although the distributed data collection tool is specific to UNIX, the statistical techniques can be used for detection and estimation of aging in other software as well.
298 citations
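The "estimated time to exhaustion" metric above can be sketched by fitting a trend to sampled resource usage and extrapolating to the resource limit. The abstract only says "well known slope estimation techniques", so ordinary least squares is used here as one standard choice; the sample data and capacity are invented.

```python
# Estimated time to exhaustion: fit a least-squares slope to resource
# usage samples and extrapolate to the capacity limit.

def slope(xs, ys):
    """Ordinary least-squares slope of ys over xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def time_to_exhaustion(times, usage, capacity):
    """Time units until usage reaches capacity, assuming the linear
    trend continues; infinite if there is no upward trend."""
    s = slope(times, usage)
    if s <= 0:
        return float("inf")
    return (capacity - usage[-1]) / s

hours = [0, 1, 2, 3, 4]
swap_mb = [100, 110, 121, 130, 140]   # swap space used, MB
print(time_to_exhaustion(hours, swap_mb, capacity=240))  # → 10.0 hours
```

A robust non-parametric slope estimator would be less sensitive to the occasional outlier sample than least squares, which matters for noisy monitoring data.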
02 Nov 2004
TL;DR: This paper applies five bug finding tools, specifically Bandera, ESC/Java 2, FindBugs, JLint, and PMD, to a variety of Java programs, and proposes a meta-tool that combines the output of the tools, looking for particular lines of code, methods, and classes that many tools warn about.
Abstract: Bugs in software are costly and difficult to find and fix. In recent years, many tools and techniques have been developed for automatically finding bugs by analyzing source code or intermediate code statically (at compile time). Different tools and techniques have different tradeoffs, but the practical impact of these tradeoffs is not well understood. In this paper, we apply five bug finding tools, specifically Bandera, ESC/Java 2, FindBugs, JLint, and PMD, to a variety of Java programs. By using a variety of tools, we are able to cross-check their bug reports and warnings. Our experimental results show that none of the tools strictly subsumes another, and indeed the tools often find nonoverlapping bugs. We discuss the techniques each of the tools is based on, and we suggest how particular techniques affect the output of the tools. Finally, we propose a meta-tool that combines the output of the tools together, looking for particular lines of code, methods, and classes that many tools warn about.
282 citations
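The meta-tool idea above amounts to voting: merge warnings from several bug finders and rank source locations by how many distinct tools flag them. A minimal sketch, assuming each tool's report has been normalized to (file, line) pairs; the report contents are invented for illustration.

```python
# Meta-tool sketch: rank warned-about locations by cross-tool agreement.
from collections import defaultdict

def rank_warnings(reports):
    """reports: {tool_name: [(file, line), ...]}. Returns (location, tools)
    pairs sorted by number of distinct flagging tools, most-agreed first."""
    votes = defaultdict(set)
    for tool, warnings in reports.items():
        for loc in warnings:
            votes[loc].add(tool)
    return sorted(votes.items(), key=lambda kv: (-len(kv[1]), kv[0]))

reports = {
    "FindBugs": [("A.java", 10), ("A.java", 42)],
    "PMD":      [("A.java", 10), ("B.java", 7)],
    "JLint":    [("A.java", 10), ("A.java", 42)],
}
for loc, tools in rank_warnings(reports):
    print(loc, len(tools))
# ('A.java', 10) is flagged by 3 tools, ('A.java', 42) by 2, ('B.java', 7) by 1
```

Agreement is only a heuristic: the paper notes the tools find largely non-overlapping bugs, so low-vote warnings are not necessarily false positives.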