Topic

Average-case complexity

About: Average-case complexity studies the expected difficulty of computational problems on inputs drawn from a probability distribution, in contrast to worst-case analysis. Over the lifetime, 1749 publications have been published within this topic, receiving 44972 citations.


Papers
Proceedings ArticleDOI
14 May 2012
TL;DR: This paper provides a reduced-order algorithm, the Extended-Force-Propagator Algorithm (EFPA), for the computation of operational-space inertia matrices in branched kinematic trees, and demonstrates efficiency gains over sparse methods for some topologies.
Abstract: This paper provides a reduced-order algorithm, the Extended-Force-Propagator Algorithm (EFPA), for the computation of operational-space inertia matrices in branched kinematic trees. The algorithm accommodates an operational space of multiple end-effectors, and is the lowest-order algorithm published to date for this computation. The key feature of this algorithm is the explicit calculation and use of matrices that propagate a force across a span of several links in a single operation. This approach allows the algorithm to achieve a computational complexity of O(N + md + m²), where N is the number of bodies, m is the number of end-effectors, and d is the depth of the system's connectivity tree. A detailed cost comparison is provided to the propagation algorithms of Rodriguez et al. (complexity O(N + dm²)) and to the sparse factorization methods of Featherstone (complexity O(nd² + md² + m²d)). For the majority of examples considered, our algorithm outperforms the previous best recursive algorithm, and demonstrates efficiency gains over sparse methods for some topologies.
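
As a rough illustration (not from the paper) of how these three cost expressions diverge, the Python sketch below sets all constant factors to 1 — the abstract gives only asymptotic orders — and tabulates the operation-count estimates for a fixed body count N as the number of end-effectors m and tree depth d grow; Featherstone's n is taken as N for comparability.

    # Hedged sketch: compare the asymptotic cost expressions quoted above.
    # Constant factors are unknown, so only growth trends are meaningful,
    # not absolute operation counts.
    def efpa(N, m, d):          # EFPA: O(N + md + m^2)
        return N + m * d + m ** 2

    def rodriguez(N, m, d):     # Rodriguez et al.: O(N + dm^2)
        return N + d * m ** 2

    def featherstone(N, m, d):  # sparse factorization: O(nd^2 + md^2 + m^2 d)
        return N * d ** 2 + m * d ** 2 + m ** 2 * d

    N = 100
    for m in (1, 4, 16):
        for d in (5, 20):
            print(f"m={m:2d} d={d:2d}  EFPA={efpa(N, m, d):6d}  "
                  f"Rodriguez={rodriguez(N, m, d):6d}  "
                  f"Featherstone={featherstone(N, m, d):7d}")

The trend matches the abstract's claim: EFPA's m² term is independent of d, so its advantage over the O(N + dm²) recursion grows with both depth and end-effector count.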

12 citations

Book ChapterDOI
TL;DR: A complexity-based measure of partitioning quality is introduced and its application to problem classification is highlighted; the measure shows that decomposition increases the overall complexity of the problem, which can be taken as an indicator of the measure's viability.
Abstract: Large-scale problems need to be decomposed for tractability purposes. The decomposition process needs to be carefully managed to minimize the interdependencies between sub-problems. A measure of partitioning quality is introduced and its application to problem classification is highlighted. The measure is complexity based (real complexity) and can be employed for both disjoint and overlap decompositions. The measure shows that decomposition increases the overall complexity of the problem, which can be taken as an indicator of the measure's viability. Real complexity can also indicate the decomposability of the design problem: the problem is decomposable when the complexity of the whole after decomposition is less than the sum of the sub-problems' complexities. As such, real complexity can signal the necessary paradigm shift from decomposition-based problem solving to evolutionary and holistic problem solving.
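
A minimal sketch of the decomposability test described above, assuming a placeholder complexity function — the paper's "real complexity" measure is not defined in this abstract, so the function and data layout here are hypothetical:

    # Hedged sketch of the decomposability criterion: a problem is
    # decomposable when the complexity of the whole after decomposition
    # is less than the sum of the sub-problems' complexities.
    def complexity(problem):
        # Hypothetical stand-in for the paper's "real complexity" measure;
        # here we simply count recorded interdependencies.
        return len(problem["dependencies"])

    def is_decomposable(whole_after_decomposition, sub_problems):
        whole = complexity(whole_after_decomposition)
        parts = sum(complexity(p) for p in sub_problems)
        return whole < parts

    whole = {"dependencies": ["a-b", "b-c", "c-d"]}
    subs = [{"dependencies": ["a-b", "b-c"]},   # overlap decomposition:
            {"dependencies": ["b-c", "c-d"]}]   # "b-c" is counted twice
    print(is_decomposable(whole, subs))  # True: 3 < 4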

12 citations

Journal Article
TL;DR: It is shown that every regular language L has either constant, logarithmic, or linear two-party communication complexity (in a worst-case partition sense); a similar trichotomy is proved for simultaneous communication complexity, along with a quadrichotomy for probabilistic communication complexity.
Abstract: We show that every regular language L has either constant, logarithmic or linear two-party communication complexity (in a worst-case partition sense). We prove a similar trichotomy for simultaneous communication complexity and a quadrichotomy for probabilistic communication complexity.
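
To make the constant end of this trichotomy concrete, here is a small illustration of our own (the language and partition below are not from the paper): for the regular language L = Σ*aΣ* — words containing at least one 'a' — one bit of communication decides membership under any partition of the input positions between the two players.

    # Hedged illustration of the constant-complexity case of the trichotomy.
    # Alice holds the symbols at `alice_positions`, Bob holds the rest.
    # Alice sends a single bit: "my positions contain an 'a'".
    def protocol(word, alice_positions):
        alice_bit = any(word[i] == "a" for i in alice_positions)  # 1 bit sent
        bob_sees_a = any(word[i] == "a"
                         for i in range(len(word))
                         if i not in alice_positions)
        return alice_bit or bob_sees_a  # Bob announces the answer

    assert protocol("xbyaz", alice_positions={0, 2}) is True
    assert protocol("xbybz", alice_positions={0, 2}) is False

Languages in the logarithmic and linear classes of the trichotomy require communication that grows with the input length; no fixed number of bits suffices for them under a worst-case partition.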

12 citations

Dissertation
01 Jan 2012
TL;DR: This thesis investigates the power and limits of efficient joint computation in several computational models (query algorithms, circuits, and Turing machines); it significantly improves and extends past results on limits to efficient joint computation for multiple independent tasks, identifies barriers to progress towards better circuit lower bounds for multiple-output operators, and begins an original line of inquiry into the complexity of joint computation.
Abstract: Joint computation is the ubiquitous scenario in which a computer is presented with not one, but many computational tasks to perform. A fundamental question arises: when can we cleverly combine computations, to perform them with greater efficiency or reliability than by tackling them separately? This thesis investigates the power and, especially, the limits of efficient joint computation, in several computational models: query algorithms, circuits, and Turing machines. We significantly improve and extend past results on limits to efficient joint computation for multiple independent tasks; identify barriers to progress towards better circuit lower bounds for multiple-output operators; and begin an original line of inquiry into the complexity of joint computation. In more detail, we make contributions in the following areas: Improved direct product theorems for randomized query complexity: The "direct product problem" seeks to understand how the difficulty of computing a function on each of k independent inputs scales with k. We prove the following direct product theorem (DPT) for query complexity: if every T-query algorithm has success probability at most 1 − ε in computing the Boolean function f on input distribution μ, then for α ≤ 1, every αεTk-query algorithm has success probability at most (2^{αε}(1 − ε))^k in computing the k-fold direct product f^{⊗k} correctly on k independent inputs from μ. In light of examples due to Shaltiel, this statement gives an essentially optimal tradeoff between the query bound and the error probability. Using this DPT, we show that for an absolute constant α > 0, the worst-case success probability of any αR₂(f)k-query randomized algorithm for f^{⊗k} falls exponentially with k. The best previous statement of this type, due to Klauck, Špalek, and de Wolf, required a query bound of O(bs(f)·k). Our proof technique involves defining and analyzing a collection of martingales associated with an algorithm attempting to solve f^{⊗k}. Our method is quite general and yields a new XOR lemma and threshold DPT for the query model, as well as DPTs for the query complexity of learning tasks, search problems, and tasks involving interaction with dynamic entities. We also give a version of our DPT in which decision tree size is the resource of interest. Joint complexity in the Decision Tree Model: We study the diversity of possible behaviors of the joint computational complexity of a collection f1, …, fk of Boolean functions over a shared input. We focus on the deterministic decision tree model, with depth as the complexity measure; in this model, we prove a result to the effect that the "obvious" constraints on joint computational complexity are essentially the only ones. The proof uses an intriguing new type of cryptographic data structure called a "mystery bin," which we construct using a polynomial separation between deterministic and unambiguous query complexity shown by Savický. We also pose a conjecture in the communication model which, if proved, would extend our result to that model.
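
The tradeoff in the DPT above can be read off numerically. The short Python sketch below (our own illustration; the parameter values are arbitrary) evaluates the bound (2^{αε}(1 − ε))^k next to the naive per-instance benchmark (1 − ε)^k, showing that shrinking the query budget via α costs only a modest factor per instance while success still falls exponentially in k.

    # Hedged numeric illustration of the DPT bound quoted above.
    # Per instance: any T-query algorithm succeeds w.p. at most 1 - eps.
    # k-fold: any (alpha*eps*T*k)-query algorithm succeeds w.p. at most
    #         (2**(alpha*eps) * (1 - eps))**k.
    eps, k = 0.25, 20
    naive = (1 - eps) ** k  # benchmark: independent per-instance runs
    for alpha in (0.1, 0.5, 1.0):
        dpt_bound = (2 ** (alpha * eps) * (1 - eps)) ** k
        print(f"alpha={alpha:3.1f}  DPT bound={dpt_bound:.3e}  "
              f"naive (1-eps)^k={naive:.3e}")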

12 citations


Network Information
Related Topics (5)

Topic                              Papers    Citations    Related
Time complexity                    36K       879.5K       89%
Approximation algorithm            23.9K     654.3K       87%
Data structure                     28.1K     608.6K       83%
Upper and lower bounds             56.9K     1.1M         83%
Computational complexity theory    30.8K     711.2K       83%
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2021    6
2020    10
2019    9
2018    10
2017    32