Topic
Average-case complexity
About: Average-case complexity is a research topic. Over its lifetime, 1749 publications have been published within this topic, receiving 44972 citations.
Papers published on a yearly basis
Papers
18 Jun 2012
TL;DR: It is shown that, at each iteration, the step size computed by this Mehrotra's predictor-corrector variant algorithm is bounded below, for n ≥ 2, by $\frac{1}{200 n^{4}}$; consequently, the algorithm has $O(n^{4}|\log(\epsilon)|)$ iteration complexity.
Abstract: Based on the good computational results of the feasible version of the Mehrotra's predictor-corrector variant algorithm presented by Bastos and Paixao, in this paper we discuss its complexity. We prove the efficiency of this algorithm by showing its polynomial complexity and, consequently, its Q-linear convergence. We start by proving some technical results which are used to estimate the algorithm's step size. It is shown that, at each iteration, the step size computed by this Mehrotra's predictor-corrector variant algorithm is bounded below, for n ≥ 2, by $\frac{1}{200 n^{4}}$; consequently, the algorithm has $O(n^{4}|\log(\epsilon)|)$ iteration complexity.
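The bound can be illustrated numerically; a minimal sketch, where the hidden constant in the $O(\cdot)$ iteration bound is a hypothetical placeholder, not a value from the paper:

```python
import math

def step_size_lower_bound(n: int) -> float:
    """Lower bound 1/(200 n^4) on the per-iteration step size, valid for n >= 2."""
    assert n >= 2
    return 1.0 / (200 * n ** 4)

def iteration_bound(n: int, eps: float, c: float = 1.0) -> float:
    """O(n^4 |log eps|) iteration-complexity bound; the constant c is hypothetical."""
    return c * n ** 4 * abs(math.log(eps))
```

For example, at n = 2 the step size is bounded below by 1/3200, and the iteration bound grows by a factor of 16 each time n doubles.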
4 citations
01 Jan 2000
TL;DR: This thesis investigates variable complexity algorithms and proposes two fast algorithms, based on fast distance-metric computation and fast matching approaches, that allow computational scalability in distance computation with graceful degradation in overall image quality.
Abstract: In this thesis we investigate variable complexity algorithms. The complexities of these algorithms are input-dependent, i.e., the type of input determines the complexity required to complete the operation. The key idea is to enable the algorithm to classify the inputs so that unnecessary operations can be pruned. The goal of the design of the variable complexity algorithm is to minimize the average complexity over all possible input types, including the cost of classifying the inputs. We study two of the fundamental operations in standard image/video compression, namely, the discrete cosine transform (DCT) and motion estimation (ME).
We first explore variable complexity in the inverse DCT by testing for zero inputs. The test structure can also be optimized for minimal total complexity given the input statistics. In this case, the larger the number of zero coefficients, i.e., the coarser the quantization stepsize, the greater the complexity reduction. As a consequence, tradeoffs between complexity and distortion can be achieved.
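The simplest input class in such a zero-testing scheme is the all-zero block, for which the transform can be skipped outright. A minimal NumPy sketch, assuming an 8×8 orthonormal DCT; the function names are illustrative, not from the thesis:

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (rows are frequency vectors)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def idct2_variable(coeffs: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Variable-complexity 2-D inverse DCT: all-zero blocks are detected and
    pruned, so the coarser the quantization, the cheaper the decode."""
    if not coeffs.any():          # zero-input test: skip the transform entirely
        return np.zeros_like(coeffs, dtype=float)
    return C.T @ coeffs @ C       # full inverse transform for nonzero blocks
```

The single `coeffs.any()` test is the degenerate case of the optimized classification tree the thesis describes; a fuller version would also classify partially-zero blocks.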
For the direct DCT we propose a variable-complexity fast approximation algorithm. The variable-complexity part computes only the DCT coefficients that will not be quantized to zero according to the classification results (in addition, the quantizer can benefit from this information by bypassing its operations for zero coefficients). The classification structure can also be optimized for given input statistics. The fast approximation part, on the other hand, approximates the DCT coefficients with much less complexity. The complexity can be scaled, i.e., it allows more complexity reduction at lower-quality coding, and can be made quantization-dependent to keep the distortion degradation at a certain level.
In video coding, ME is the part of the encoder that requires the most complexity and therefore achieving significant complexity reduction in ME has always been a goal in video coding research. We propose two fast algorithms based on fast distance metric computation or fast matching approaches. Both of our algorithms allow computational scalability in distance computation with graceful degradation in the overall image quality. The first algorithm exploits hypothesis testing in fast metric computation whereas the second algorithm uses thresholds obtained from partial distances in hierarchical candidate elimination. (Abstract shortened by UMI.)
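The partial-distance idea can be sketched as follows: accumulate the distortion row by row and eliminate a candidate block as soon as its running sum already exceeds the best match found so far. A hedged sketch, with illustrative names, not the thesis's exact procedure:

```python
import numpy as np

def sad_partial(block: np.ndarray, cand: np.ndarray, best_so_far: float) -> float:
    """Sum of absolute differences with early termination: stop accumulating
    once the partial distance exceeds the best candidate seen so far."""
    total = 0.0
    for row_b, row_c in zip(block, cand):
        total += float(np.abs(row_b - row_c).sum())
        if total >= best_so_far:   # candidate eliminated by its partial distance
            break
    return total
```

Because the returned value is only a lower bound when termination fires, it is safe for candidate elimination but is not the exact SAD of the rejected candidate.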
4 citations
TL;DR: An alternative complexity analysis of some integer programming algorithms is considered, based on measures of "intrinsic difficulty"; the work extends to integer programming some notions of condition measures that have been developed for convex optimization.
Abstract: Integer programming algorithms have some kind of exponential complexity in the worst case. However, it is also observed that data instances of similar sizes might have very different practical complexity when solved by computer algorithms. This paper considers an alternative complexity analysis of some integer programming algorithms, which is based on measures of "intrinsic difficulty." The work extends to the setup of integer programming some notions of condition measures which have been developed for convex optimization. We present bounds on the so-called lattice width of polyhedra and address the impact on the complexity of integer programming algorithms like Lenstra's algorithm as well as branch and bound algorithms. The condition measures introduced here reflect shape and spatial orientation factors which are not fully captured by the traditional combinatorial analysis.
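For intuition, the lattice width of a polytope P along an integer direction c is max c·x − min c·x over P, minimized over nonzero integer c. A brute-force 2-D sketch for vertex-described polytopes; the direction bound is an arbitrary illustration, not the paper's method:

```python
def width_along(vertices, c):
    """Width of a polytope (given by its vertex list) along direction c."""
    vals = [sum(ci * xi for ci, xi in zip(c, v)) for v in vertices]
    return max(vals) - min(vals)

def lattice_width_2d(vertices, bound=5):
    """Brute-force lattice width over nonzero c in Z^2 with |c_i| <= bound."""
    dirs = [(a, b) for a in range(-bound, bound + 1)
                   for b in range(-bound, bound + 1) if (a, b) != (0, 0)]
    return min(width_along(vertices, c) for c in dirs)
```

A small lattice width is exactly the situation Lenstra-type algorithms exploit: the polytope is thin in some integer direction, so branching on that direction yields few subproblems.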
4 citations
TL;DR: The average-case complexity of a branch-and-bound algorithm for the Minimum Dominating Set problem in random graphs in the G(n,p) model is studied, and phase transitions between subexponential and exponential average-case complexities are identified.
Abstract: The average-case complexity of a branch-and-bound algorithm for the Minimum Dominating Set problem in random graphs in the G(n,p) model is studied. We identify phase transitions between subexponential and exponential average-case complexities, depending on the growth of the probability p with respect to the number n of nodes.
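A minimal branch and bound of this flavour branches on an undominated vertex v: any dominating set must contain v or one of its neighbours. A sketch under that observation, not the paper's exact algorithm; `gnp` builds a G(n,p) instance:

```python
import itertools
import random

def gnp(n: int, p: float, seed: int = 0) -> dict:
    """Adjacency sets of an Erdos-Renyi G(n, p) random graph."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def min_dominating_set(adj: dict) -> set:
    """Branch and bound: branch on an undominated vertex (it or a neighbour
    must be chosen); prune when the partial set cannot beat the best found."""
    best = set(adj)                        # trivial dominating set as upper bound

    def branch(chosen: set, undominated: set) -> None:
        nonlocal best
        if len(chosen) >= len(best):       # bound: cannot improve, prune subtree
            return
        if not undominated:
            best = set(chosen)
            return
        v = next(iter(undominated))
        for u in {v} | adj[v]:             # some dominator of v must be chosen
            covered = set().union(*({w} | adj[w] for w in chosen | {u}))
            branch(chosen | {u}, set(adj) - covered)

    branch(set(), set(adj))
    return best
```

The paper's phase transitions concern how the number of nodes such a search tree explores scales with n when p grows at different rates; the pruning rule here is the simplest size bound, whereas sharper bounds shrink the tree further.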
4 citations