Proceedings ArticleDOI

Materialized view replacement using Markov's analysis

11 Sep 2014-pp 771-775
TL;DR: In this paper, a new materialized view maintenance scheme using Markov analysis is proposed to ensure consistent performance; Markov analysis is chosen to predict the steady-state probability rather than relying on the initial probability alone.
Abstract: Materialized views are used in large data-centric applications to expedite query processing. The efficiency of a materialized view depends on the degree to which query results can be answered from the existing materialized views. Materialized views are constructed following different methodologies, so their efficacy depends on the methodology by which they are formed. Construction of materialized views is often time consuming, and after a certain time the performance of the materialized views degrades when the nature of the queries changes. In this situation, either new materialized views can be constructed from scratch or the existing views can be upgraded. Fresh construction of materialized views has higher time complexity, so modification of the existing views is the better solution. This modification process is classified under materialized view maintenance. Materialized view maintenance is a continuous process, and the system can be tuned to ensure a constant rate of performance. A materialized view construction process that is not supported by a maintenance scheme will suffer from performance degradation. In this paper, a new materialized view maintenance scheme is proposed using Markov analysis to ensure consistent performance. Markov analysis is chosen here to predict the steady-state probability over the initial probability. Keywords: view maintenance; Markov; steady-state probability
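The steady-state prediction the abstract refers to can be illustrated with a small sketch. The transition matrix below is an invented example, not data from the paper: states might represent classes of incoming query patterns, and the steady-state vector gives the long-run fraction of time spent in each state, which is what distinguishes it from the initial probability distribution.

```python
import numpy as np

# Hypothetical transition matrix: rows are current query-pattern states,
# columns are next states; each row sums to 1. Values are illustrative only.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.2, 0.3, 0.5],
])

def steady_state(P, iters=10_000, tol=1e-12):
    """Power iteration: repeat pi <- pi @ P until the vector stops changing."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])  # uniform initial probability
    for _ in range(iters):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            break
        pi = nxt
    return pi

pi = steady_state(P)
print(pi)        # long-run fraction of time in each state
print(pi.sum())  # probabilities sum to 1
```

Whatever the initial distribution, the iteration converges to the same fixed point satisfying pi = pi P, which is why steady-state probabilities are a more stable basis for maintenance decisions than the initial probabilities.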
Citations
Proceedings ArticleDOI
04 Dec 2014
TL;DR: In this paper, the authors adopt an incremental view maintenance policy based on attribute affinity to update the materialized views at run time without using extra space, minimizing the data transfer between secondary memory and primary memory (where the active materialized views reside).
Abstract: View materialization has been practiced for several years in large data-centric applications such as databases, data warehouses, and data mining for faster query processing. Initially the materialized views are formed based on some methodology; however, the performance (hit-miss ratio) of the materialized views may degrade after a certain time if the incoming query pattern changes. This situation can be handled efficiently by employing a view maintenance scheme that works dynamically during query execution at run time. As these materialized views involve huge amounts of data, consideration of time and space complexity during the maintenance process plays an important role. In this paper, the authors adopt an incremental view maintenance policy based on attribute affinity to update the materialized views at run time without using extra space, minimizing the data transfer between secondary memory and primary memory (where the active materialized views reside). This in turn reduces time complexity and supports incremental maintenance, eliminating the requirement of fully replacing the existing materialized views.

4 citations

DissertationDOI
20 Apr 2017
TL;DR: This thesis defines a process to determine the minimal set of workload queries and the set of views to materialize, and proposes a dynamic process that allows users to upgrade the CoDe model with a context-aware editor and build an optimized lattice structure that minimizes the effort needed to recalculate it.
Abstract: Data warehouse systems aim to support decision making by providing users with the appropriate information at the right time. This task is particularly challenging in business contexts where large amounts of data are produced at high speed. To this end, data warehouses have been equipped with Online Analytical Processing tools that help users make fast and precise decisions through the execution of complex queries. Since the computation of these queries is time consuming, data warehouses precompute a set of materialized views answering the workload queries. This thesis defines a process to determine the minimal set of workload queries and the set of views to materialize. The set of queries is represented by an optimized lattice structure used to select the views to be materialized according to the processing time costs and the view storage space. The minimal set of required Online Analytical Processing queries is computed by analyzing the data model defined with the visual language CoDe (Complexity Design). The latter makes it possible to conceptually organize the visualization of data reports and to generate visualizations of data obtained from data-mart queries. CoDe adopts a hybrid modeling process combining two main methodologies: user-driven and data-driven. The first aims to create a model according to the user's knowledge, requirements, and analysis needs, while the latter is in charge of concretizing data and their relationships in the model through Online Analytical Processing queries. Since the materialized views change over time, we also propose a dynamic process that allows users to (i) upgrade the CoDe model with a context-aware editor, (ii) build an optimized lattice structure able to minimize the effort to recalculate it, and (iii) propose the new set of views to materialize. Moreover, the process applies a Markov strategy

2 citations

References
Book
01 Jun 1999
TL;DR: A new data model, the chronicle model, is proposed; it permits the capture, within the data model, of many computations common to transactional data-recording systems, and languages are developed that ensure low maintenance complexity independent of the sequence sizes.

133 citations

Proceedings ArticleDOI
27 May 1997
TL;DR: This work addresses some issues related to determining the set of shared views to be materialized in order to achieve the best combination of good performance and low maintenance, and provides an algorithm for achieving this goal.
Abstract: Data warehouses are accessed by different queries with different frequencies. The portions of data accessed by a query can be treated as a view. When these views are related to each other and defined over overlapping portions of the base data, then it may be more efficient not to materialize all the views, but rather to materialize certain "shared views" from which the query results can be generated. We address some issues related to determining this set of shared views to be materialized in order to achieve the best combination of good performance and low maintenance, and provide an algorithm for achieving this goal.

74 citations

Proceedings ArticleDOI
02 Aug 1999
TL;DR: One key idea of the approach is to regard the complex changes made to a view definition after synchronization as an atomic unit; another is to exploit knowledge of how the view definition was synchronized, especially the containment information between the old and new views.
Abstract: While current view technology assumes that information systems (ISs) do not change their schemas, our Evolvable View Environment (EVE) project addresses this problem by evolving the view definitions affected by IS schema changes, which we call view synchronization. In EVE, the view synchronizer rewrites the view definitions by replacing view components with suitable components from other ISs. However, after such a view redefinition process, the view extents, if materialized, must also be brought up to date. In this paper, we propose strategies to address this incremental adaptation of the view extent after view synchronization. One key idea of our approach is to regard the complex changes made to a view definition after synchronization as an atomic unit; another is to exploit knowledge of how the view definition was synchronized, especially the containment information between the old and new views. Our techniques successfully adapt views even when base relations are unavailable, whereas currently known maintenance strategies from the literature would fail.

37 citations

Proceedings ArticleDOI
15 May 2009
TL;DR: This paper focuses on solving the view materialization and selection problem using a genetic algorithm approach subject to both disk-space and maintenance constraints.
Abstract: A data warehouse is an approach in which data from multiple heterogeneous and distributed operational systems (OLTP) are extracted, transformed, and loaded into a central repository for the purpose of decision making. Since such databases store huge amounts of historical data, it is necessary to devise methods by which complex OLAP queries can be answered as fast as possible. OLAP is an approach that facilitates analytical queries accessing multidimensional databases. Using materialized views as precomputed results for time-consuming queries is a common method for speeding up analytical queries. However, some constraints do not allow systems to create all possible views. Therefore, one of the crucial decisions that data warehouse designers need to make is the selection of the right set of views to be materialized. This paper focuses on solving the view materialization and selection problem using a genetic algorithm approach subject to both disk-space and maintenance constraints.

23 citations
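The genetic-algorithm formulation in the reference above can be sketched as a constrained 0/1 selection problem. All numbers below (view benefits, sizes, the space budget) and the GA parameters are invented for illustration, not taken from the paper.

```python
import random

random.seed(42)  # deterministic run for the sketch

BENEFIT = [30, 20, 15, 25, 10]   # query-time saving if view i is materialized
SIZE    = [40, 25, 20, 35, 10]   # disk-space cost of view i
BUDGET  = 80                     # total disk space available

def fitness(bits):
    """Total benefit of the selected views; 0 if the space budget is exceeded."""
    size = sum(s for s, b in zip(SIZE, bits) if b)
    if size > BUDGET:
        return 0  # infeasible selection
    return sum(v for v, b in zip(BENEFIT, bits) if b)

def evolve(pop_size=30, gens=50, mut_rate=0.1):
    """Simple elitist GA: keep the top half, refill with crossover + mutation."""
    pop = [[random.randint(0, 1) for _ in SIZE] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(SIZE))
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < mut_rate:
                i = random.randrange(len(child))
                child[i] ^= 1                    # flip one bit (mutation)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))  # selected views and their total benefit
```

The infeasibility penalty (fitness 0 over budget) is one simple way to encode the disk-space constraint; a maintenance-cost term could be subtracted from the benefit in the same fitness function.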

Book ChapterDOI
12 Dec 2011
TL;DR: This work proposes a system that distinguishes between the maintenance of logical contents and physical structure, resulting in support for concurrent high update rates and immediate, index-based query processing with correct transaction semantics.
Abstract: Maintenance of secondary indexes and materialized views can cause the latency and bandwidth of concurrent information capture to degrade by orders of magnitude. In order to preserve performance during temporary bursts of update activity, e.g., during load operations, many systems therefore support deferred maintenance, at least for materialized views. However, deferring maintenance means that index or view contents may become out of date. In such cases, a seemingly benign choice among alternative query execution plans affects whether query results represent the latest database contents. We propose here a system that distinguishes between the maintenance of logical contents and physical structure. This distinction lets us compensate for deferred logical maintenance operations while minimizing the impact of deferred physical maintenance operations, and results in support for concurrent high update rates and immediate, index-based query processing with correct transaction semantics.

13 citations