Conference

International Workshop on Testing Database Systems 

About: International Workshop on Testing Database Systems is an academic conference. The conference publishes mainly in the areas of query optimization and database testing. Over its lifetime, the conference has published 63 papers, which have received 916 citations.

Papers
Proceedings Article
29 Jun 2009
TL;DR: This paper argues that traditional benchmarks (like the TPC benchmarks) are not sufficient for analyzing novel cloud services, and presents some initial ideas for what a new benchmark that better fits the characteristics of cloud computing (e.g., scalability, pay-per-use, and fault tolerance) should look like.
Abstract: Traditionally, the goal of benchmarking a software system is to evaluate its performance under a particular workload for a fixed configuration. The most prominent examples for evaluating transactional database systems, as well as other components on top (such as application servers or web servers), are the various TPC benchmarks. In this paper we argue that traditional benchmarks (like the TPC benchmarks) are not sufficient for analyzing novel cloud services. Moreover, we present some initial ideas for what a new benchmark that better fits the characteristics of cloud computing (e.g., scalability, pay-per-use, and fault tolerance) should look like. The main challenge of such a new benchmark is to make the reported results comparable, because different providers offer different services with different capabilities and guarantees.

171 citations
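The abstract above is largely a position statement, but its core technical point is that a cloud benchmark should report metrics such as cost under a pay-per-use model and behavior under growing load, rather than peak throughput at one fixed configuration. A minimal sketch of what such a metric could look like, with entirely invented throughput and billing figures:

```python
# Hypothetical sketch: a pay-per-use-aware benchmark metric that reports
# cost per operation across scale levels instead of a single peak-throughput
# number. All figures below are invented for illustration.

scale_runs = [
    # (emulated users, measured ops/sec, dollars billed for the one-hour run)
    (100,       950,    1.20),
    (1_000,   8_800,   11.00),
    (10_000, 71_000,  135.00),
]

for users, ops_per_sec, dollars in scale_runs:
    ops_per_hour = ops_per_sec * 3600
    cost_per_million_ops = dollars / (ops_per_hour / 1e6)
    # A cloud-oriented benchmark would track how this cost curve behaves as
    # load grows (ideally staying flat), capturing scalability and
    # pay-per-use together.
    print(f"{users:>6} users: {ops_per_sec:>7} ops/s, "
          f"${cost_per_million_ops:.2f} per million operations")
```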

Proceedings Article
13 Jun 2011
TL;DR: Defines a new, complex, mixed-workload benchmark, the CH-benCHmark, which bridges the gap between the established single-workload suites of TPC-C for OLTP and TPC-H for OLAP and executes a complex mixed workload.
Abstract: While standardized and widely used benchmarks address either operational or real-time Business Intelligence (BI) workloads, the lack of a hybrid benchmark led us to the definition of a new, complex, mixed workload benchmark, called mixed workload CH-benCHmark. This benchmark bridges the gap between the established single-workload suites of TPC-C for OLTP and TPC-H for OLAP, and executes a complex mixed workload: a transactional workload based on the order entry processing of TPC-C and a corresponding TPC-H-equivalent OLAP query suite run in parallel on the same tables in a single database system. As it is derived from these two most widely used TPC benchmarks, the CH-benCHmark produces results highly relevant to both hybrid and classic single-workload systems.

133 citations
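The benchmark described above runs TPC-C-style order-entry transactions and TPC-H-style analytical queries in parallel on the same tables. The sketch below shows only that structural idea, using SQLite and a toy one-table schema as stand-ins for the real TPC-C/TPC-H schema, transactions, and query suite:

```python
# Structural sketch of a mixed OLTP/OLAP workload on shared tables.
# SQLite and the toy "orders" table stand in for the real CH-benCHmark schema.
import sqlite3
import threading

DB = "chbench_sketch.db"

setup = sqlite3.connect(DB)
setup.execute("PRAGMA journal_mode=WAL")  # let readers and the writer overlap
setup.execute("CREATE TABLE IF NOT EXISTS orders "
              "(id INTEGER PRIMARY KEY, customer INTEGER, amount REAL)")
setup.commit()
setup.close()

def oltp_worker(n_txns: int) -> None:
    # Transactional side: many short order-entry style insert transactions.
    conn = sqlite3.connect(DB)
    for i in range(n_txns):
        conn.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)",
                     (i % 100, 9.99))
        conn.commit()
    conn.close()

def olap_worker(n_queries: int) -> None:
    # Analytical side: aggregate queries over the same table, run concurrently.
    conn = sqlite3.connect(DB)
    for _ in range(n_queries):
        conn.execute("SELECT customer, SUM(amount) FROM orders "
                     "GROUP BY customer").fetchall()
    conn.close()

threads = [threading.Thread(target=oltp_worker, args=(500,)),
           threading.Thread(target=olap_worker, args=(50,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```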

Proceedings Article
13 Jun 2008
TL;DR: In this article, the authors describe a new approach to generating inputs to database applications that satisfy certain properties specified by the tester, cause queries to return non-empty result sets, and cause updates and inserts to execute without violating uniqueness or referential integrity constraints.
Abstract: This paper describes a new approach to generating inputs to database applications. The goal is to generate inputs that satisfy certain properties specified by the tester and that also cause queries to return non-empty result sets and cause updates and inserts to execute without violating uniqueness or referential integrity constraints. Based on the SQL statements in the application, test generation queries are generated; execution of these queries yields test inputs with the desired properties. The test generation algorithm is described and illustrated by an example. The technique has been implemented and experimental evaluation is in progress.

38 citations
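The core idea above is to derive a "test generation query" from the application's SQL, so that executing it against the database yields input values for which the application query is guaranteed to return rows. A rough sketch of that idea for one parameterized SELECT, with a toy in-memory schema; the technique in the paper additionally covers updates, inserts, and uniqueness/referential-integrity constraints, and derives the generation queries automatically:

```python
# Rough sketch: derive inputs for a parameterized application query such that
# the query returns a non-empty result set. Schema and data are toy examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, dept TEXT, salary REAL)")
conn.executemany("INSERT INTO employees (dept, salary) VALUES (?, ?)",
                 [("sales", 50_000), ("eng", 90_000), ("eng", 120_000)])

# Application query under test, with placeholders for tester-supplied inputs.
app_query = "SELECT id FROM employees WHERE dept = ? AND salary > ?"

# Hand-written "test generation query": pick concrete values from existing
# rows that satisfy the predicate structure (a dept that exists, a threshold
# just below an existing salary), so a non-empty result is guaranteed.
gen_query = ("SELECT dept, salary - 1 FROM employees "
             "ORDER BY salary DESC LIMIT 1")
dept, salary_threshold = conn.execute(gen_query).fetchone()

rows = conn.execute(app_query, (dept, salary_threshold)).fetchall()
print("generated inputs:", (dept, salary_threshold), "->", rows)  # non-empty
conn.close()
```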

Proceedings Article
21 May 2012
TL;DR: A framework is developed to quantify an optimizer's accuracy for a given workload; it makes use of the fact that optimizers expose switches or hints that let users influence the plan choice and generate plans other than the default plan.
Abstract: The accuracy of a query optimizer is intricately connected with a database system's performance and its operational cost: the more accurate the optimizer's cost model, the better the resulting execution plans. Database application programmers and other practitioners have long provided anecdotal evidence that database systems differ widely with respect to the quality of their optimizers, yet, to date, no formal method is available to database users to assess or refute such claims. In this paper, we develop a framework to quantify an optimizer's accuracy for a given workload. We make use of the fact that optimizers expose switches or hints that let users influence the plan choice and generate plans other than the default plan. Using these mechanisms, we force the generation of multiple alternative plans for each test case, time the execution of all alternatives, and rank the plans by their effective costs. We compare this ranking with the ranking by estimated cost and compute a score for the accuracy of the optimizer. We present initial results of an anonymized comparison of several major commercial database systems, demonstrating that there are in fact substantial differences between systems. We also suggest ways to incorporate this knowledge into the commercial development process.

34 citations
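The scoring step sketched in the abstract, ranking a test case's alternative plans by estimated cost and again by measured execution time and then scoring how well the two rankings agree, can be illustrated with a small Kendall-tau-style pairwise comparison. The (estimated cost, measured time) pairs below are invented; a real run would obtain them by forcing alternative plans via the system's hints or switches and timing each one:

```python
# Sketch of the scoring idea: compare the ranking of alternative plans by
# estimated cost with their ranking by measured execution time.
from itertools import combinations

def rank_agreement(plans):
    """Fraction of plan pairs ordered the same way by estimated cost and by
    measured time (a Kendall-tau-like score in [0, 1]; 1.0 = perfect)."""
    pairs = list(combinations(plans, 2))
    concordant = sum(
        1 for (cost_a, time_a), (cost_b, time_b) in pairs
        if (cost_a - cost_b) * (time_a - time_b) > 0
    )
    return concordant / len(pairs)

# One test case: the default plan plus alternatives forced via hints/switches.
# (estimated cost units, measured seconds) -- invented numbers.
plans = [
    (1_000, 0.8),   # plan the optimizer would pick
    (1_400, 0.5),   # alternative that is actually faster
    (5_000, 3.1),
    (9_000, 2.4),
]
print(f"optimizer accuracy score: {rank_agreement(plans):.2f}")  # 0.67 here
```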

Proceedings Article
29 Jun 2009
TL;DR: It is argued that query interactions can have a significant impact on database system performance, and that it is important to take these interactions into account when characterizing workloads, designing test cases, or developing performance tuning algorithms for database systems.
Abstract: Database workloads consist of mixes of queries that run concurrently and interact with each other. In this paper, we demonstrate that query interactions can have a significant impact on database system performance. Hence, we argue that it is important to take these interactions into account when characterizing workloads, designing test cases, or developing performance tuning algorithms for database systems. To capture and model query interactions, we propose using an experimental approach that is based on sampling the space of possible interactions and fitting statistical models to the sampled data. We discuss using such an approach for database testing and tuning, and we present some opportunities and research challenges.

34 citations
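The experimental approach described above samples the space of possible query mixes and fits a statistical model to the measurements, so the performance effect of interactions can be predicted without running every mix. The sketch below uses numpy to fit a least-squares model mapping a mix's composition to its completion time; the sampled mixes and timings are invented stand-ins for real measurements, and the purely additive model is only the simplest possible choice:

```python
# Sketch: sample concurrent query mixes, then fit a statistical model that
# predicts completion time from the mix composition. Data below is invented.
import numpy as np

# Each row: how many instances of query types Q1, Q2, Q3 run concurrently.
mixes = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [2, 1, 0],
    [1, 2, 1],
    [0, 1, 3],
    [3, 0, 2],
], dtype=float)

# Measured completion time (seconds) for each sampled mix.
times = np.array([1.1, 2.0, 3.2, 4.5, 8.0, 11.5, 10.2])

# Fit time ~ intercept + sum_i coef_i * count_i by ordinary least squares.
# Interaction effects could be captured by adding product terms or a richer
# model; this additive fit is just the simplest illustration.
X = np.hstack([np.ones((len(mixes), 1)), mixes])
coefs, *_ = np.linalg.lstsq(X, times, rcond=None)

# Predict the completion time of an unseen mix: 2 x Q1, 0 x Q2, 1 x Q3.
new_mix = np.array([1.0, 2.0, 0.0, 1.0])  # leading 1.0 is the intercept term
print("per-query-type coefficients:", coefs[1:])
print("predicted completion time:", new_mix @ coefs)
```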

Performance Metrics
No. of papers from the Conference in previous years
Year    Papers
2013    10
2012    12
2011    8
2010    8
2009    13
2008    12