Measuring empirical computational complexity
Citations
SPEED: precise and efficient static estimation of program computational complexity
Control-flow refinement and progress invariants for bound analysis
PerfFuzz: automatically generating pathological inputs
Empirical hardness models: Methodology and a case study on combinatorial auctions
Predicting Execution Time of Computer Programs Using Sparse Polynomial Regression
Frequently Asked Questions (9)
Q2. What are the functions that can be used to allocate performance cost to different callers?
Contexts are useful for apportioning the performance cost of a library to different callers, or even the costs of data structure operations to different instances of the data structure.
Q3. What is the way to mark function invocations?
CF-TrendProf allows the user to mark function invocations with contexts based on the call graph or on arbitrary runtime data values.
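To make the idea concrete, here is a minimal sketch of marking invocations with contexts drawn from runtime data and apportioning cost per context. This is not CF-TrendProf's actual interface; the decorator, its name, and the context function are all invented for illustration.

```python
import time
from collections import defaultdict

# Accumulated wall-clock cost, keyed by (function, context).
cost_by_context = defaultdict(float)

def with_context(context_of):
    """Hypothetical marker: tag each invocation with a context computed
    from its runtime arguments, and charge its cost to that context."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            ctx = context_of(*args, **kwargs)
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                cost_by_context[(fn.__name__, ctx)] += time.perf_counter() - start
        return wrapper
    return decorate

# Context drawn from a runtime value: which table instance is being queried,
# so each data structure instance accrues its own cost.
@with_context(lambda table, key: id(table))
def lookup(table, key):
    return table.get(key)
```

With two distinct table instances, `lookup` accrues two separate cost entries, one per instance, rather than a single aggregate for the function.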
Q4. What is the way to understand the relationship between performance and workload features?
A log-log scatter plot is more appropriate for understanding the relationship between performance and workload features because, unlike on a linear-linear scatter plot, a constant relative error corresponds to a constant distance.
Q5. What is the danger of allowing a model to grow gratuitously complex?
There is a danger that, if the authors allow their models to grow gratuitously complex, they will overfit the training data and fail to generalize to other data.
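The danger can be demonstrated on synthetic data (assumed here for illustration, not taken from the paper): a high-degree polynomial fits noisy, essentially linear timing measurements better on the training points than a linear model does, yet predicts worse on unseen inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
# True behavior is linear; measurements carry a little noise.
y_train = 2.0 * x_train + rng.normal(scale=0.05, size=10)

simple = np.polyfit(x_train, y_train, deg=1)    # plausible model
complex_ = np.polyfit(x_train, y_train, deg=9)  # interpolates the noise

# Unseen inputs between the training points.
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2.0 * x_test

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))
```

The degree-9 model's training error is essentially zero, but its oscillation between training points makes its test error larger than the linear model's.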
Q6. What is the mechanism to verify bounds?
The mechanism is to annotate each function with types that describe how many steps the function takes to compute its result and use a dependent type system to verify these bounds.
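The cited mechanism is static: a dependent type system checks the bounds at compile time. The sketch below is only a dynamic analogue of the same idea, with invented names: each function carries an annotation giving its claimed step bound as a function of input size, and a runtime checker counts actual steps and verifies they stay within the bound.

```python
def bounded(steps_bound):
    """Hypothetical checker: verify at runtime that a function's step
    count stays within its declared bound for the given input size."""
    def decorate(fn):
        def wrapper(xs):
            counter = {"steps": 0}
            result = fn(xs, counter)
            assert counter["steps"] <= steps_bound(len(xs)), (
                f"{fn.__name__} exceeded its declared step bound")
            return result
        return wrapper
    return decorate

@bounded(lambda n: n + 1)  # claim: at most n + 1 steps on a list of length n
def total(xs, counter):
    acc = 0
    counter["steps"] += 1      # one step to initialize
    for x in xs:
        acc += x
        counter["steps"] += 1  # one step per element
    return acc
```

In the static setting, the type system discharges this check once and for all, so no counting happens at runtime.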
Q7. How would a user realize an ideal situation in a larger algorithm?
In order to realize such an ideal situation in the context of a larger algorithm, a user would likely have to provide CF-TrendProf with suitable context annotations and invocation features.
Q8. What is the effect of the CF-TrendProf on the performance of the benchmark?
This dependence of performance on such subtle properties means that the apparent scalability of an algorithm, as measured by CF-TrendProf, is as much a consequence of the code being measured as of the empirical distribution of workloads.
Q9. What is the purpose of reducing the number of unknown entities?
In a setting where performance need not be a clean function of workload features, reducing the number of unknown entities is useful.