Topic

Average-case complexity

About: Average-case complexity studies the expected computational resources an algorithm uses when its inputs are drawn from a distribution, rather than in the worst case. Over its lifetime, 1749 publications have appeared on this topic, receiving 44972 citations.


Papers
Posted Content
TL;DR: This paper provides an algorithm for approximating the information complexity of an arbitrary function $f$ to within any additive error $\alpha > 0$, thus resolving an open question as to whether information complexity is computable.
Abstract: The information complexity of a function $f$ is the minimum amount of information Alice and Bob need to exchange to compute the function $f$. In this paper we provide an algorithm for approximating the information complexity of an arbitrary function $f$ to within any additive error $\alpha > 0$, thus resolving an open question as to whether information complexity is computable. In the process, we give the first explicit upper bound on the rate of convergence of the information complexity of $f$ when restricted to $b$-bit protocols to the (unrestricted) information complexity of $f$.
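For context, a common prior-dependent formulation (the paper's exact notation may differ) writes the information complexity of $f$ under an input distribution $\mu$ as

$IC_\mu(f) = \inf_{\pi} \big[ I(\Pi; X \mid Y) + I(\Pi; Y \mid X) \big],$

where the infimum ranges over protocols $\pi$ that compute $f$, $(X, Y)$ are Alice's and Bob's inputs drawn from $\mu$, and $\Pi$ is the protocol transcript. Because the infimum is taken over protocols of unbounded length, computability is not obvious; the convergence rate mentioned above quantifies how quickly the analogous quantity restricted to $b$-bit protocols approaches this unrestricted value as $b$ grows.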

12 citations

ReportDOI
01 Jun 1990
TL;DR: This paper reviews some of the recent results that have emerged in the study of average-case complexity, including a description of Levin's framework for studying average-case complexity, as well as his proof of the existence of complete problems for a class of distributional problems.
Abstract: A primary contribution of theoretical computer science has been the identification of the so-called NP-complete problems, a well-known class of problems provably equivalent to one another in worst-case computational complexity, modulo polynomial-time computation. These problems, being the hardest in the class NP, are widely believed to be unsolvable by any polynomial-time algorithm, and indeed, no sub-exponential time algorithm is known for any NP-complete problem. This paper reviews some of the recent results that have emerged in the study of average-case complexity. Included is a description of Levin's framework for studying average-case complexity, as well as his proof of the existence of complete problems for a class of distributional problems. The paper also presents some new results, including a natural and more liberal extension of Levin's model, in addition to a partial characterization of the relationships among the new average-case complexity classes.
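For reference, Levin's notion of efficiency on average (standard in this line of work, though the survey's notation may differ) calls an algorithm with running time $t(x)$ polynomial on average with respect to an input distribution $\mu$ if there is an $\varepsilon > 0$ such that

$\sum_{x} \mu(x) \, \dfrac{t(x)^{\varepsilon}}{|x|} < \infty.$

This definition, rather than the naive expectation of $t(x)$, is what keeps the class of distributional problems solvable in average polynomial time robust under polynomial overhead, and thus suitable for a completeness theory.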

12 citations

Journal ArticleDOI
TL;DR: Bounds on the average-case number of stacks or queues required by sequential or parallel Stacksort and Queuesort are proved using the incompressibility method.
Abstract: Analyzing the average-case complexity of algorithms is a very practical but very difficult problem in computer science. In the past few years, we have demonstrated that Kolmogorov complexity is an important tool for analyzing the average-case complexity of algorithms, and we have developed the incompressibility method. In this paper, several simple examples are used to further demonstrate the power and simplicity of this method. We prove bounds on the average-case number of stacks (queues) required for sorting by sequential or parallel Stacksort (Queuesort).
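As a flavor of the kind of average-case quantity involved, the toy experiment below (an illustration only, not the paper's exact Queuesort model or its Kolmogorov-complexity argument) greedily distributes a random permutation over FIFO queues so that each queue stays nondecreasing, and reports the average number of queues used:

import random

def queues_needed(perm):
    # Greedily place each element on the first queue whose tail does not
    # exceed it, so every queue stays nondecreasing; return how many
    # queues were opened.
    tails = []  # tails[i] = last element appended to queue i
    for x in perm:
        for i, t in enumerate(tails):
            if t <= x:
                tails[i] = x
                break
        else:
            tails.append(x)  # no existing queue fits; open a new one
    return len(tails)

def average_queues(n, trials=200, seed=0):
    # Estimate the average queue count over random permutations of length n.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        total += queues_needed(perm)
    return total / trials

if __name__ == "__main__":
    for n in (64, 256, 1024):
        print(n, round(average_queues(n), 1))

Empirically the queue count grows sublinearly in $n$ for random permutations; rigorous average-case bounds of this sort are what the incompressibility method is used to establish.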

12 citations

Journal Article
TL;DR: A broad framework is proposed for asking which computational intractability assumptions are inherent to cryptography: tasks are modeled as multi-party computation functionalities, and a universally composable secure protocol for one task given access to another is taken as the basic complexity reduction between them. Every reduction the authors are able to classify is unconditionally true or false, or else equivalent to the existence of one-way functions or of semi-honest oblivious transfer, suggesting that only a few distinct assumptions are inherent to the framework.
Abstract: Which computational intractability assumptions are inherent to cryptography? We present a broad framework to pose and investigate this question. We first aim to understand the “cryptographic complexity” of various tasks, independent of any computational assumptions. In our framework the cryptographic tasks are modeled as multi-party computation functionalities. We consider a universally composable secure protocol for one task given access to another task as the most natural complexity reduction between the two tasks. Some of these cryptographic complexity reductions are unconditional, others are unconditionally impossible, but the vast majority appear to depend on computational assumptions; it is this relationship with computational assumptions that we study. In our detailed investigation of large classes of 2-party functionalities, we find that every reduction we are able to classify turns out to be unconditionally true or false, or else equivalent to the existence of one-way functions (OWF) or of semi-honest (equivalently, standalone-secure) oblivious transfer protocols (sh-OT). This leads us to conjecture that there are only a small finite number of distinct computational assumptions that are inherent among the infinite number of different cryptographic reductions in our framework. If indeed only a few computational intractability assumptions manifest in this framework, we propose that they are of an extraordinarily fundamental nature, since the framework contains a large variety of cryptographic tasks, and was formulated without regard to any of the prevalent computational intractability assumptions.

12 citations

Journal ArticleDOI
TL;DR: A new formulation of the complexity profile is presented, which expands its possible application to high-dimensional real-world and mathematically defined systems and defines a class of related complexity profile functions for a given system, demonstrating the generality of the formalism.
Abstract: Quantifying the complexity of systems consisting of many interacting parts has been an important challenge in the field of complex systems in both abstract and applied contexts. One approach, the complexity profile, is a measure of the information to describe a system as a function of the scale at which it is observed. We present a new formulation of the complexity profile, which expands its possible application to high-dimensional real-world and mathematically defined systems. The new method is constructed from the pairwise dependencies between components of the system. The pairwise approach may serve as both a formulation in its own right and a computationally feasible approximation to the original complexity profile. We compare it to the original complexity profile by giving cases where they are equivalent, proving properties common to both methods, and demonstrating where they differ. Both formulations satisfy linear superposition for unrelated systems and conservation of total degrees of freedom (sum rule). The new pairwise formulation is also a monotonically non-increasing function of scale. Furthermore, we show that the new formulation defines a class of related complexity profile functions for a given system, demonstrating the generality of the formalism.
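A minimal sketch of the pairwise-dependency building block is given below, assuming mutual information estimated from samples as the dependency measure between component pairs; how these pairwise quantities are aggregated across scales into the profile itself follows the paper and is not reproduced here.

import math
from collections import Counter

def mutual_information(xs, ys):
    # Plug-in estimate of I(X;Y) in bits from paired discrete samples.
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

def pairwise_dependencies(samples):
    # samples: list of state vectors (one per observation) for a system
    # with k components; returns the k-by-k matrix of pairwise mutual
    # informations between components.
    k = len(samples[0])
    cols = [[s[i] for s in samples] for i in range(k)]
    return [[mutual_information(cols[i], cols[j]) for j in range(k)]
            for i in range(k)]

if __name__ == "__main__":
    # Toy system: components 0 and 1 are copies, component 2 is independent.
    import random
    rng = random.Random(1)
    samples = []
    for _ in range(5000):
        a = rng.randint(0, 1)
        samples.append((a, a, rng.randint(0, 1)))
    for row in pairwise_dependencies(samples):
        print([round(v, 2) for v in row])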

12 citations


Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations (89% related)
Approximation algorithm: 23.9K papers, 654.3K citations (87% related)
Data structure: 28.1K papers, 608.6K citations (83% related)
Upper and lower bounds: 56.9K papers, 1.1M citations (83% related)
Computational complexity theory: 30.8K papers, 711.2K citations (83% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2021    6
2020    10
2019    9
2018    10
2017    32