Parametric query optimization for linear and piecewise linear cost functions
References
Lectures on Polytopes
Access path selection in a relational database management system
Computational geometry: an introduction through randomized algorithms
The Volcano optimizer generator: extensibility and efficient search
Optimization of dynamic query evaluation plans
Frequently Asked Questions (15)
Q2. What are the future works in "Parametric query optimization for linear and piecewise linear cost functions" ?
Future work includes implementing or using polyhedron handling code that minimizes overheads, and characterizing the performance of their algorithms.
Q4. How can the authors apply the cost polytope algorithm in the piecewise linear case?
The cost polytope algorithm for the linear case can be applied in the piecewise linear case by pre-partitioning the parameter space in a way that every cost function is linear in every partition.
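As a toy illustration of this pre-partitioning step, the sketch below merges the breakpoints of several one-dimensional piecewise linear cost functions into a common partition on which every function is linear. The function name and interfaces are illustrative only; the paper's parameter space is multidimensional, where partitioning is correspondingly more involved.

```python
def common_partition(breakpoint_sets, lo, hi):
    """Pre-partition the parameter interval [lo, hi] so that every
    piecewise linear cost function is linear on each piece.

    breakpoint_sets: one set of breakpoints per cost function.
    Returns the list of sub-intervals of the common refinement.
    """
    # Pool all interior breakpoints, then sort to form the refinement.
    pts = sorted({lo, hi, *(p for s in breakpoint_sets
                            for p in s if lo < p < hi)})
    return list(zip(pts, pts[1:]))

# Two cost functions: one bends at s = 1, the other at s = 1 and s = 2.
print(common_partition([{1.0}, {2.0, 1.0}], 0.0, 3.0))
# [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
```

Within each resulting piece, every cost function is linear, so the linear-case algorithm applies unchanged.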
Q5. how many calls to the optimizer are necessary to check if a given set of plans?
A total of v calls to the optimizer, where v is the number of vertices of the parameter-space decomposition, are necessary and sufficient to check whether a given set of plans, with a decomposition defining a region of optimality for each plan, is the POSP.
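The vertex-checking idea behind this count can be sketched in one dimension, where the vertices of the decomposition are the region endpoints. All names and interfaces below are illustrative, not the paper's pseudocode; the toy "optimizer call" simply returns the minimum cost over all plans at a point.

```python
def is_posp_1d(plans, regions, eps=1e-9):
    """Toy 1-D check of a candidate POSP decomposition.

    plans:   list of (a, b) with cost(s) = a*s + b
    regions: list of ((lo, hi), plan_index) covering the parameter interval
    One "optimizer call" per region endpoint (the vertices of the
    decomposition) suffices: compare the assigned plan's cost with the
    true optimum there.
    """
    def cost(i, s):
        a, b = plans[i]
        return a * s + b

    def optimum(s):  # stands in for a conventional optimizer call
        return min(cost(i, s) for i in range(len(plans)))

    for (lo, hi), i in regions:
        for v in (lo, hi):  # check every vertex of the region
            if cost(i, v) > optimum(v) + eps:
                return False
    return True

# Plans: cost0(s) = 2s + 1 and cost1(s) = s + 2; they cross at s = 1.
plans = [(2.0, 1.0), (1.0, 2.0)]
good = [((0.0, 1.0), 0), ((1.0, 3.0), 1)]  # correct decomposition
bad = [((0.0, 3.0), 0)]                    # plan 0 is suboptimal at s = 3
print(is_posp_1d(plans, good), is_posp_1d(plans, bad))  # True False
```

Because each plan's cost and the optimum are both linear within a region, agreement at the region's vertices implies agreement throughout it.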
Q6. What is the cost function of a plan in n+1?
The authors can think of the cost function as a hyperplane in R^(n+1) whose equation is given by sn+1 = c1s1 + c2s2 + · · · + cnsn + cn+1, where sn+1 denotes the cost of the plan.
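Evaluating such a cost hyperplane at a parameter point is just an affine function of the parameters. A minimal sketch (the function name and coefficient values are illustrative):

```python
import numpy as np

def plan_cost(coeffs, s):
    """Cost of a plan at parameter point s = (s1, ..., sn).

    coeffs = (c1, ..., cn, cn+1); the cost hyperplane in R^(n+1) is
    sn+1 = c1*s1 + ... + cn*sn + cn+1.
    """
    *linear, constant = coeffs
    return float(np.dot(linear, s) + constant)

# Example: a plan with cost function 2*s1 + 3*s2 + 5, evaluated at (1, 1).
print(plan_cost([2.0, 3.0, 5.0], [1.0, 1.0]))  # 10.0
```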
Q7. What is the procedure for carving out the optimal region for a new plan?
In Ganguly’s algorithm, the procedure for carving out the optimal region for a new plan (given the existing decomposition) begins with the parameter-space polytope and chips out the optimal region of each plan in the existing decomposition.
Q8. What is the way to solve the parametric query optimization problem?
The authors propose an approach for parametric query optimization with piecewise linear cost functions, based on extending existing optimization algorithms to use cost functions in place of costs.
Q9. What is the probability that a cost hyperplane touches it?
If the coefficient vectors of the cost functions are distributed uniformly in a unit sphere, then the probability that a cost hyperplane not defining the final cost polytope touches it is zero; see [HS02] for details.
Q10. What is the parametric query optimization problem?
The parametric query optimization (PQO) problem is defined as follows [Gan98]: Let s1, s2, . . . , sn denote n parameters, where each si quantifies some cost parameter.
Q11. What is the definition of a parametric optimal set of plans?
A parametric optimal set of plans (POSP) is a minimal subset of MPSP that includes at least one optimal plan for each point in the parameter space.
Q12. What is the cost polytope construction algorithm?
A polytope construction algorithm is given a set of halfspaces and the algorithm intersects the halfspaces to construct the desired polytope.
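The halfspace-intersection step can be illustrated with SciPy's general-purpose `HalfspaceIntersection` routine (this is not the paper's incremental algorithm, just the underlying geometric operation). The toy 2-D example below intersects four halfspaces into the unit square:

```python
import numpy as np
from scipy.spatial import HalfspaceIntersection

# Halfspaces in the form A x + b <= 0, stacked as [A | b].
# Hypothetical 2-D example: the unit square as four halfspaces.
halfspaces = np.array([
    [-1.0,  0.0,  0.0],   # -x <= 0, i.e. x >= 0
    [ 1.0,  0.0, -1.0],   #  x <= 1
    [ 0.0, -1.0,  0.0],   #  y >= 0
    [ 0.0,  1.0, -1.0],   #  y <= 1
])
interior_point = np.array([0.5, 0.5])  # any point strictly inside

hs = HalfspaceIntersection(halfspaces, interior_point)
print(sorted(map(tuple, np.round(hs.intersections, 6))))
# the four corners of the unit square
```

In the cost-polytope setting, each cost hyperplane contributes one halfspace, and the intersection's boundary is the lower envelope of the plan costs.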
Q13. What is the way to save on the number of calls to the optimizer?
The authors can then save on the number of calls to the optimizer by intersecting the intermediate cost polytope with all the returned hyperplanes.
Q14. What is the way to calculate the cost hyperplane?
The authors assumed that the conventional optimizer returns one of the optimal plans at a given point in the parameter space, along with its cost hyperplane.
Q15. How can one obtain the optimality regions from the cost polytope?
Each facet of this polytope corresponds to a plan in the parametric optimal set of plans (POSP), and one can obtain its optimality region by projecting the facet onto the parameter space (R^n).
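In one parameter dimension this projection reduces to reading off the lower envelope of the cost lines: each segment of the envelope is a facet, and its projection onto the s-axis is the corresponding plan's optimality interval. The sketch below computes these intervals directly (function and variable names are illustrative, not from the paper):

```python
def optimality_regions_1d(plans, lo, hi):
    """Project the lower envelope of linear cost functions onto a
    1-D parameter axis.

    plans: list of (a, b) with cost(s) = a*s + b
    Returns a list of ((left, right), plan_index) optimality regions.
    """
    # Breakpoints: pairwise crossings of the cost lines inside [lo, hi].
    pts = {lo, hi}
    for i, (a1, b1) in enumerate(plans):
        for a2, b2 in plans[i + 1:]:
            if a1 != a2:
                s = (b2 - b1) / (a1 - a2)
                if lo < s < hi:
                    pts.add(s)
    pts = sorted(pts)

    regions = []
    for left, right in zip(pts, pts[1:]):
        mid = 0.5 * (left + right)
        best = min(range(len(plans)),
                   key=lambda i: plans[i][0] * mid + plans[i][1])
        # Merge adjacent intervals won by the same plan.
        if regions and regions[-1][1] == best:
            regions[-1] = ((regions[-1][0][0], right), best)
        else:
            regions.append(((left, right), best))
    return regions

plans = [(2.0, 1.0), (1.0, 2.0)]  # the two cost lines cross at s = 1
print(optimality_regions_1d(plans, 0.0, 3.0))
# [((0.0, 1.0), 0), ((1.0, 3.0), 1)]
```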