
Showing papers on "Average-case complexity published in 2011"


Journal ArticleDOI
TL;DR: The complexity of several constraint satisfaction problems under the quantum adiabatic algorithm, in its simplest implementation, is determined by studying the size dependence of the gap to the first excited state of "typical" instances; at large sizes N, the complexity increases exponentially for all models studied.
Abstract: We determine the complexity of several constraint satisfaction problems using the quantum adiabatic algorithm in its simplest implementation. We do so by studying the size dependence of the gap to the first excited state of "typical" instances. We find that, at large sizes $N$, the complexity increases exponentially for all models that we study. We also compare our results against the complexity of the analogous classical algorithm WalkSAT and show that the harder the problem is for the classical algorithm, the harder it is also for the quantum adiabatic algorithm.
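For context on the classical baseline named above, here is a minimal sketch of the standard WalkSAT loop (a generic textbook rendering, not the authors' implementation; the clause encoding, the noise parameter p, and the flip budget are illustrative assumptions):

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=100_000):
    """Minimal WalkSAT sketch: clauses are lists of non-zero ints,
    where literal v means variable |v| is True if v > 0."""
    assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                      # satisfying assignment found
        clause = random.choice(unsat)          # pick a random unsatisfied clause
        if random.random() < p:
            var = abs(random.choice(clause))   # noise step: flip a random variable
        else:
            # greedy step: flip the variable that leaves the fewest clauses unsatisfied
            def broken_after_flip(v):
                assign[v] = not assign[v]
                broken = sum(not any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return broken
            var = min((abs(l) for l in clause), key=broken_after_flip)
        assign[var] = not assign[var]
    return None                                # no solution found within the budget
```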

91 citations


Proceedings ArticleDOI
08 Jun 2011
TL;DR: Lower bounds for the QMA-communication complexity of the functions Inner Product and Disjointness are shown, and how one can 'transfer' hardness under an analogous measure in the query complexity model to the communication model using Sherstov's pattern matrix method is described.
Abstract: We show several results related to interactive proof modes of communication complexity. First we show lower bounds for the QMA-communication complexity of the functions Inner Product and Disjointness. We describe a general method to prove lower bounds for QMA-communication complexity, and show how one can 'transfer' hardness under an analogous measure in the query complexity model to the communication model using Sherstov's pattern matrix method. Combining a result by Vereshchagin and the pattern matrix method we find a partial function with AM-communication complexity O(log n), PP-communication complexity Ω(n^{1/3}), and QMA-communication complexity Ω(n^{1/6}). Hence in the world of communication complexity noninteractive quantum proof systems are not able to efficiently simulate co-nondeterminism or interaction. These results imply that the related questions in Turing machine complexity theory cannot be resolved by 'algebrizing' techniques. Finally we show that in MA-protocols there is an exponential gap between one-way protocols and two-way protocols for a partial function (this refers to the interaction between Alice and Bob). This is in contrast to nondeterministic, AM-, and QMA-protocols, where one-way communication is essentially optimal.

63 citations


Journal ArticleDOI
30 Sep 2011-Chaos
TL;DR: A geometric approach to complexity based on the principle that complexity requires interactions at different scales of description is developed, which presents a theory of complexity measures for finite random fields using the geometric framework of hierarchies of exponential families.
Abstract: We develop a geometric approach to complexity based on the principle that complexity requires interactions at different scales of description. Complex systems are more than the sum of their parts of any size and not just more than the sum of their elements. Using information geometry, we therefore analyze the decomposition of a system in terms of an interaction hierarchy. In mathematical terms, we present a theory of complexity measures for finite random fields using the geometric framework of hierarchies of exponential families. Within our framework, previously proposed complexity measures find their natural place and gain a new interpretation.

62 citations


Proceedings ArticleDOI
22 Oct 2011
TL;DR: It is proved that if epsilon >= 1/2, then the problem is hard in one of the models, that is, no polynomial-time algorithm can distinguish between the following two cases: (i) the instance is a (1-epsilon)-satisfiable semi-random instance and (ii) the instance is at most delta-satisfiable (for every delta > 0).

Abstract: In this paper, we study the average case complexity of the Unique Games problem. We propose a semi-random model, in which a unique game instance is generated in several steps. First an adversary selects a completely satisfiable instance of Unique Games, then she chooses an epsilon-fraction of all edges, and finally replaces ("corrupts") the constraints corresponding to these edges with new constraints. If all steps are adversarial, the adversary can obtain any (1-epsilon)-satisfiable instance, so then the problem is as hard as in the worst case. We show however that we can find a solution satisfying a (1-delta) fraction of all constraints in polynomial time if at least one step is random (we require that the average degree of the graph is Omega(log k)). Our result holds only for epsilon less than some absolute constant. We prove that if epsilon >= 1/2, then the problem is hard in one of the models, that is, no polynomial-time algorithm can distinguish between the following two cases: (i) the instance is a (1-epsilon)-satisfiable semi-random instance and (ii) the instance is at most delta-satisfiable (for every delta > 0); the result assumes the 2-to-2 conjecture. Finally, we study semi-random instances of Unique Games that are at most (1-epsilon)-satisfiable. We present an algorithm that distinguishes between the case when the instance is a semi-random instance and the case when the instance is an (arbitrary) (1-delta)-satisfiable instance if epsilon > c delta (for some absolute constant c).

52 citations


Journal ArticleDOI
TL;DR: A ranking-based black-box algorithm is presented that has a runtime of Θ(n/log n), which shows that the OneMax problem does not become harder with the additional ranking-basedness restriction.
Abstract: Randomized search heuristics such as evolutionary algorithms, simulated annealing, and ant colony optimization are a broadly used class of general-purpose algorithms. Analyzing them via classical methods of theoretical computer science is a growing field. While several strong runtime analysis results have appeared in the last 20 years, a powerful complexity theory for such algorithms is yet to be developed. We enrich the existing notions of black-box complexity by the additional restriction that not the actual objective values, but only the relative quality of the previously evaluated solutions may be taken into account by the black-box algorithm. Many randomized search heuristics belong to this class of algorithms. We show that the new ranking-based model gives more realistic complexity estimates for some problems. For example, the class of all binary-value functions has a black-box complexity of $O(\log n)$ in the previous black-box models, but has a ranking-based complexity of $\Theta(n)$. For the class of all OneMax functions, we present a ranking-based black-box algorithm that has a runtime of $\Theta(n / \log n)$, which shows that the OneMax problem does not become harder with the additional ranking-basedness restriction.
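As a concrete illustration of the function classes and the ranking-based restriction discussed above (a sketch of the standard definitions, not the paper's formal model; the rank_of_new_point helper is hypothetical):

```python
def onemax(x):
    """OneMax: number of one-bits; maximized by the all-ones string."""
    return sum(x)

def binary_value(x):
    """BinaryValue: the bit string read as a binary number (most significant bit first)."""
    return int("".join(map(str, x)), 2)

def rank_of_new_point(history, f, x):
    """Ranking-based oracle sketch: the algorithm never sees f(x) itself,
    only the position of f(x) among all previously evaluated points
    (rank 0 = worst value seen so far)."""
    history.append(f(x))
    return sorted(history).index(history[-1])
```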

43 citations


Journal ArticleDOI
TL;DR: This work studies the properties of other measures that arise naturally in this framework and introduces yet more notions of resource-bounded Kolmogorov complexity, to demonstrate that other complexity measures such as branching-program size and formula size can also be discussed in terms of Kolmogorov complexity.

42 citations


Journal ArticleDOI
TL;DR: A version of Algorithmic Information Theory based on finite transducers instead of Turing machines is developed, called finite-state complexity, which is computable, and for which there is no a priori upper bound on the number of states used in minimal descriptions of arbitrary strings.

39 citations


Book ChapterDOI
01 Jan 2011
TL;DR: This work provides an exposition of the basic definitions suggested by Leonid Levin, and discusses some of the considerations underlying these definitions.
Abstract: In 1984, Leonid Levin initiated a theory of average-case complexity. We provide an exposition of the basic definitions suggested by Levin, and discuss some of the considerations underlying these definitions.

38 citations


Book ChapterDOI
14 Jun 2011
TL;DR: In this paper, the authors enrich the two existing black-box complexity notions due to Wegener and other authors by the restriction that not the actual objective values, but only the relative quality of the previously evaluated solutions may be taken into account by the algorithm.
Abstract: Randomized search heuristics are a broadly used class of general-purpose algorithms. Analyzing them via classical methods of theoretical computer science is a growing field. A big step forward would be a useful complexity theory for such algorithms. We enrich the two existing black-box complexity notions due to Wegener and other authors by the restriction that not the actual objective values, but only the relative quality of the previously evaluated solutions may be taken into account by the algorithm. Many randomized search heuristics belong to this class of algorithms. We show that the new ranking-based model gives more realistic complexity estimates for some problems, while for others the low complexities of the previous models still hold.

31 citations


Proceedings ArticleDOI
08 Jun 2011
TL;DR: It is shown that the assumption that DistNP is contained in AvgP does not imply that NP = RP by relativizing techniques, and an oracle is given relative to which the assumption holds but the conclusion fails.
Abstract: Non-relativization of complexity issues can be interpreted as showing that these issues cannot be resolved by "black-box" techniques. We show that the assumption DistNP is contained in AvgP does not imply that NP = RP by relativizing techniques. More precisely, we give an oracle relative to which the assumption holds but the conclusion fails. Moreover, relative to our oracle, there are problems in the intersection of NP and CoNP that require exponential circuit complexity. We also give an alternate version where DistNP is contained in AvgP is true, but a problem in the second level of the polynomial hierarchy is hard on the uniform distribution.

30 citations


Proceedings ArticleDOI
20 Jul 2011
TL;DR: Big-O notation is a method for studying the efficiency of algorithms; although it involves mathematical functions and their step-by-step comparison, it illustrates a methodology for measuring the complexity factor.
Abstract: Algorithms are generally written to solve problems or implement mechanisms on machines, and several algorithms may exist for the same problem, so the efficiency of the resulting algorithms needs to be quantified; the factors to be quantified include time complexity, space complexity, administrative cost, and speed of implementation. One of the effective methods for studying the efficiency of algorithms is Big-O notation. Although Big-O notation involves mathematical functions and their step-by-step comparison, it illustrates a methodology for measuring the complexity factor. The output is always expected as a smooth line or curve with a small and static slope.
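For reference (not part of the paper's abstract), the definition that the methodology above rests on can be stated precisely:

```latex
% f grows no faster than g, up to a constant factor, for all sufficiently large n
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \in \mathbb{N}:\ \forall n \ge n_0,\ 0 \le f(n) \le c \, g(n)
```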

Book ChapterDOI
20 Sep 2011
TL;DR: Tradeoffs between time, bit, and message complexity are presented and used to derive lower bounds on the time complexity of distributed algorithms, as demonstrated for the MIS and coloring problems, and the bit complexity of the state-of-the-art O(Δ) coloring algorithm is reduced without changing its time and message complexity.
Abstract: We present tradeoffs between time complexity t, bit complexity b, and message complexity m. Two communication parties can exchange Θ(m log(tb/m^2) + b) bits of information for m < √(bt) and Θ(b) for m ≥ √(bt). This allows one to derive lower bounds on the time complexity of distributed algorithms, as we demonstrate for the MIS and the coloring problems. We reduce the bit complexity of the state-of-the-art O(Δ) coloring algorithm without changing its time and message complexity. We also give techniques for several problems that require a time increase of t^c (for an arbitrary constant c) to cut both bit and message complexity by Ω(log t). This improves on the traditional time-coding technique, which does not allow one to cut message complexity.

Journal ArticleDOI
TL;DR: A new volume-based user selection algorithm with low complexity is proposed for a multiuser multiple-input and multiple-output downlink system based on block diagonalisation precoding that does not need to perform the singular value decomposition operation and water-filling algorithm during each user selection step and hence, significantly reduces the computational time.
Abstract: A new volume-based user selection algorithm with low complexity is proposed for a multiuser multiple-input and multiple-output downlink system based on block diagonalisation precoding. The new algorithm, which achieves this reduced computational complexity, is compared with other user selection algorithms, such as the semi-orthogonal user selection (SUS) and capacity-based user selection algorithms. The proposed algorithm stems from the new volume-based user selection method that uses the product of the diagonal elements in the upper-triangular matrix obtained via the Householder reduction procedure of QR factorisation applied to the selected users' channel matrix. The computational effort of the new algorithm is reduced by one-fourth compared with the SUS algorithm. Compared with the capacity-based algorithm, the proposed algorithm does not need to perform the singular value decomposition operation and water-filling algorithm during each user selection step, and hence, significantly reduces the computational time. If the maximum number of supportable users is K, the calculation results show that the capacity-based algorithm has 4K times the complexity of the proposed algorithm. Furthermore, the simulation results demonstrate that the volume-based algorithm displays better capacity performance than the SUS algorithm, and the sum-rate capacity of the volume-based algorithm is comparable with that of the capacity-based algorithm but with much less computational complexity.
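An illustrative numpy rendering of the volume metric described above (not the authors' exact procedure; the greedy loop, matrix shapes, and function names are assumptions): stack a candidate user's channel rows on the already selected users' rows, take a QR factorization, and score the candidate by the product of the absolute diagonal entries of R.

```python
import numpy as np

def volume_metric(H_selected, H_candidate):
    """Product of |diag(R)| from a QR factorization of the stacked channel rows."""
    H = H_candidate if H_selected is None else np.vstack([H_selected, H_candidate])
    _, R = np.linalg.qr(H.conj().T, mode="reduced")   # columns span the users' channel rows
    return np.prod(np.abs(np.diag(R)))

def greedy_user_selection(channels, max_users):
    """Greedy volume-based selection sketch over per-user channel matrices."""
    selected, H_sel = [], None
    for _ in range(max_users):
        best = max((k for k in range(len(channels)) if k not in selected),
                   key=lambda k: volume_metric(H_sel, channels[k]))
        selected.append(best)
        H_sel = channels[best] if H_sel is None else np.vstack([H_sel, channels[best]])
    return selected
```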

Journal ArticleDOI
TL;DR: This work establishes tight bounds for the transition complexity of Boolean operations (in the case of union, the upper and lower bounds differ by a multiplicative factor of two), and shows that the transition complexity results for union and complementation are very different from the state complexity results for the same operations.
Abstract: We consider the transition complexity of regular languages based on incomplete deterministic finite automata. We establish tight bounds for the transition complexity of Boolean operations; in the case of union, the upper and lower bounds differ by a multiplicative factor of two. We show that the transition complexity results for union and complementation are very different from the state complexity results for the same operations. However, for intersection, the transition complexity bounds turn out to be similar to the corresponding bounds for state complexity.

Journal ArticleDOI
08 Jun 2011
TL;DR: It is shown that for an absolute constant α > 0, the worst-case success probability of any αR_2(f)k-query randomized algorithm for f^{⊗k} falls exponentially with k, and the underlying direct product theorem gives an essentially optimal trade-off between the query bound and the error probability.
Abstract: The "direct product problem'' is a fundamental question in complexity theory which seeks to understand how the difficulty of computing a function on each of k independent inputs scales with k. We prove the following direct product theorem (DPT) for query complexity: if every T$-query algorithmhas success probability at most 1 - \eps in computing the Boolean function f on input distribution mu, then for alpha \leq 1, every alpha \eps Tk-query algorithm has success probability at most (2^{\alpha \eps}(1-\eps))^k in computing the k-fold direct product f^{\otimes k} correctly on k independent inputs from \mu. In light of examples due to Shaltiel, this statement gives an essentially optimal tradeoff between the query bound and the error probability. Using this DPT, we show that for an absolute constant $\alpha > 0$, the worst-case success probability of any $\alpha R_2(f) k$-query randomized algorithm for f^{\otimes k} falls exponentially with k. The best previous statement of this type, due to Klauck, \v{S}palek, and de Wolf, required a query bound of O(bs(f) k). Our proof technique involves defining and analyzing a collection of martingales associated with an algorithm attempting to solve f^{\otimes k}. Our method is quite general and yields a new XOR lemma and threshold DPT for the query model, as well as DPTs for the query complexity of learning tasks, search problems, and tasks involving interaction with dynamic entities. We also give a version of our DPT in which decision tree size is the resource of interest.

Proceedings ArticleDOI
12 Jul 2011
TL;DR: This work gives mutation-only unbiased black-box algorithms having complexity O(n log n) for the classical JumpK test function class and for a subclass of the well-known Partition problem.
Abstract: Unbiased black-box complexity was recently introduced as a refined complexity model for randomized search heuristics (Lehre and Witt, GECCO 2010). For several problems, this notion avoids the unrealistically low complexity results given by the classical model of Droste, Jansen, and Wegener (Theor. Comput. Sci. 2006). In this work, we show that for two natural problems the unbiased black-box complexity remains artificially small. For the classical JumpK test function class and for a subclass of the well-known Partition problem, we give mutation-only unbiased black-box algorithms having complexity O(n log n). Since the first problem usually needs Theta(n^k) function evaluations to be optimized by standard heuristics and the second is even NP-complete, these black-box complexities seem not to indicate the true difficulty of the two problems for randomized search heuristics.
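For reference, a common formulation of the JumpK test function (following the usual Droste-Jansen-Wegener style definition; the paper's exact variant may differ in details):

```python
def jump_k(x, k):
    """Jump_k: behaves like OneMax plus k, except on a 'gap' of width k just
    below the optimum, where the fitness drops to n - OneMax(x)."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones
```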

Journal ArticleDOI
TL;DR: This paper optimizes the structure of the Wei-Xiao-Chen algorithm for the linear complexity of sequences over GF(q) with period N = 2p^n, where p and q are odd primes and q is a primitive root modulo p^2, and presents the minimum value k for which the k-error linear complexity is strictly less than the linear complexity.
Abstract: In this paper, we first optimize the structure of the Wei-Xiao-Chen algorithm for the linear complexity of sequences over GF(q) with period N = 2p^n, where p and q are odd primes, and q is a primitive root modulo p^2. Second, a union cost is proposed, so that an efficient algorithm for computing the k-error linear complexity of a sequence with period 2p^n over GF(q) is derived, where p and q are odd primes, and q is a primitive root modulo p^2. Third, we demonstrate the validity of the proposed algorithm, and also prove that there exists an error sequence e^N, where the Hamming weight of e^N is not greater than k, such that the linear complexity of (s + e)^N reaches the k-error linear complexity c. We also present a numerical example to illustrate the algorithm. Finally, we present the minimum value k for which the k-error linear complexity is strictly less than the linear complexity.
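For reference, the k-error linear complexity discussed above is usually defined as the smallest linear complexity reachable by changing at most k terms per period (a standard definition, stated here for context):

```latex
% k-error linear complexity of an N-periodic sequence s over GF(q):
% minimum linear complexity after changing at most k of the N terms per period
% (w_H denotes Hamming weight, L denotes linear complexity)
L_k(s) \;=\; \min_{\substack{e \in \mathrm{GF}(q)^N \\ w_H(e) \le k}} L(s + e)
```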

Journal ArticleDOI
TL;DR: An efficient algorithm which synthesizes all shortest linear-feedback shift registers generating K given sequences with possibly different lengths over a field is derived, and its correctness is proved.
Abstract: An efficient algorithm which synthesizes all shortest linear-feedback shift registers generating K given sequences with possibly different lengths over a field is derived, and its correctness is proved. The proposed algorithm generalizes the Berlekamp-Massey and Feng-Tzeng algorithms and is based on Massey's ideas. The time complexity of the algorithm is O(KλN) ≤ O(KN^2), where N is the length of a longest sequence and λ is the linear complexity of the sequences.
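As background for the generalization described above, a minimal single-sequence Berlekamp-Massey sketch over GF(2), returning the length of the shortest LFSR that generates a given binary sequence (the paper's algorithm handles K sequences of different lengths over an arbitrary field and is considerably more involved):

```python
def berlekamp_massey_gf2(s):
    """Length L of the shortest LFSR generating the binary sequence s (bits 0/1)."""
    n = len(s)
    c = [0] * n; b = [0] * n          # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1                      # L = current LFSR length, m = index of last length change
    for i in range(n):
        # discrepancy between s[i] and the prediction of the current LFSR
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:                          # prediction failed: update the connection polynomial
            t = c[:]
            for j in range(n - i + m):
                c[j + i - m] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# example: linear complexity of a short binary sequence
print(berlekamp_massey_gf2([0, 0, 1, 1, 0, 1, 1, 1]))
```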

Book
30 Jun 2011
TL;DR: This book studies error and complexity in numerical methods, as well as in error-free, parallel, and probabilistic computations.
Abstract: Chapter 1: Introduction. Chapter 2: Error: Precisely What, Why, and How. Chapter 3: Complexity: What, Why, and How. Chapter 4: Errors and Approximations in Digital Computers. Chapter 5: Error and Complexity in Numerical Methods. Chapter 6: Error and Complexity in Error-Free, Parallel, and Probabilistic Computations. Index.

Journal ArticleDOI
TL;DR: It is shown how the same number of agents, using 2 extra time units in the worst case, can solve the problem in only $\frac{7}{4} n - O(1)$ time on the average, and it is proved that the optimal average case complexity is $\frac{3}{2} ...$
Abstract: In a network environment supporting mobile entities (called robots or agents), a black hole is a harmful site that destroys any incoming entity without leaving any visible trace. The black-hole search problem is the task of a team of k > 1 mobile entities, starting from the same safe location and executing the same algorithm, to determine within finite time the location of the black hole. In this paper, we consider the black hole search problem in asynchronous ring networks of n nodes, and focus on time complexity. It is known that any algorithm for black-hole search in a ring requires at least 2(n - 2) time in the worst case. The best known algorithm achieves this bound with a team of n - 1 agents with an average time cost of 2(n - 2), equal to the worst case. In this paper, we first show how the same number of agents, using 2 extra time units in the worst case, can solve the problem in only $\frac{7}{4} n - O(1)$ time on the average. We then prove that the optimal average case complexity of $\frac{3}{2} ...

Journal ArticleDOI
TL;DR: An extension of the complexity space of partial functions is constructed and it is shown that it is an appropriate mathematical tool for the complexity analysis of algorithms and for the validation of recursive definitions of programs.
Abstract: The study of the dual complexity space, introduced by S. Romaguera and M.P. Schellekens [Quasi-metric properties of complexity spaces, Topol. Appl. 98 (1999), pp. 311-322], constitutes a part of the interdisciplinary research on Computer Science and Topology. The relevance of this theory is given by the fact that it allows one to apply fixed point techniques of denotational semantics to complexity analysis. Motivated by this fact and with the intention of obtaining a mixed framework valid for both disciplines, a new complexity space formed by partial functions was recently introduced and studied by S. Romaguera and O. Valero [On the structure of the space of complexity partial functions, Int. J. Comput. Math. 85 (2008), pp. 631-640]. An application of the complexity space of partial functions to model certain processes that arise, in a natural way, in symbolic computation was given in the aforementioned reference. In this paper, we enter more deeply into the relationship between semantics and complexity analysis of programs. We construct an extension of the complexity space of partial functions and show that it is, at the same time, an appropriate mathematical tool for the complexity analysis of algorithms and for the validation of recursive definitions of programs. As applications of our complexity framework, we show the correctness of the denotational specification of the factorial function and give an alternative formal proof of the asymptotic upper bound for the average case analysis of Quicksort.
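For reference, the asymptotic upper bound mentioned in the last sentence is the classical average-case estimate for Quicksort; the standard recurrence for the expected number of comparisons on a uniformly random permutation, and its solution, are:

```latex
% expected number of comparisons of Quicksort on a uniformly random permutation of n keys
C(n) = (n - 1) + \frac{2}{n}\sum_{i=0}^{n-1} C(i), \qquad C(0) = C(1) = 0,
% which solves to (H_n is the n-th harmonic number)
C(n) = 2(n + 1)H_n - 4n = O(n \log n).
```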

Dissertation
01 Jan 2011
TL;DR: This thesis presents a refined framework that is suitable for discussing computational complexity, and the key idea is to use (a certain class of) string functions as names representing these objects, which are more expressive than infinite sequences.
Abstract: Computable analysis studies problems involving real numbers, sets and functions from the viewpoint of computability. Elements of uncountable sets (such as real numbers) are represented through approximation and processed by Turing machines. However, application of this approach to computational complexity has been limited in generality. In this thesis, we present a refined framework that is suitable for discussing computational complexity. The key idea is to use (a certain class of) string functions as names representing these objects. These are more expressive than infinite sequences, which served as names in prior work that formulated complexity in more restricted settings. An important advantage of using string functions is that we can define their size in the way inspired by higher-type complexity theory. This enables us to talk about computation on string functions whose time or space is bounded polynomially in the input size, giving rise to more general analogues of the classes P, NP, and PSPACE. We also define NP- and PSPACE-completeness under suitable many-one reductions. Because our framework separates machine computation and semantics, it can be applied to problems on sets of interest in analysis once we specify a suitable representation (encoding). As prototype applications, we consider the complexity of several problems whose inputs and outputs are real numbers, real sets, and real functions. The latter two cannot be represented succinctly using existing approaches based on infinite sequences, so ours is the first treatment of functions on them. As an interesting example, the task of numerical algorithms for solving the initial value problem of differential equations is naturally viewed as an operator taking real functions to real functions. Because there was no complexity theory for operators, previous results could only state how complex the solution can be. We now reformulate them and show that the operator itself is polynomial-space complete. We survey some of such complexity results involving real numbers and cast them in our framework.

Journal ArticleDOI
TL;DR: The recent proof that the complexity class NEXP (nondeterministic exponential time) lacks nonuniform ACC circuits of polynomial size is discussed.
Abstract: I will discuss the recent proof that the complexity class NEXP (nondeterministic exponential time) lacks nonuniform ACC circuits of polynomial size. The proof will be described from the perspective of someone trying to discover it.

Book
29 Jun 2011
TL;DR: This book discusses one-way functions, pseudo-random generators, and tail bounds in the context of quantum computation and abstract complexity theory.
Abstract: Contents Preface. 1. Preliminaries. 2. Abstract complexity theory. 3. P, NP, and E. 4. Quantum computation. 5. One-way functions, pseudo-random generators. 6. Optimization problems. A. Tail bounds. Bibliography. Index.

Proceedings ArticleDOI
03 Apr 2011
TL;DR: A low-complexity non-iterative discrete bit-loading algorithm is presented that maximizes the data rate subject to a specified target BER and uniform power allocation, and achieves similar rates to incremental allocation, yet with much lower complexity.
Abstract: Adaptive bit-loading algorithms can improve the performance of OFDM systems significantly. The tradeoff between an algorithm's performance (optimum solution) and its computational complexity is essential for the implementation of loading algorithms. In this paper, we present a low-complexity non-iterative discrete bit-loading algorithm to maximize the data rate subject to a specified target BER and uniform power allocation. Simulation results show that the proposed algorithm outperforms equal-BER loading and achieves similar rates to incremental allocation, yet with much lower complexity.
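A generic non-iterative per-subcarrier loading rule of the kind such algorithms build on (a sketch under the uniform-power assumption, not the authors' algorithm; the SNR-gap formula, the bit cap, and the example SNRs are illustrative assumptions):

```python
import math

def uniform_power_bit_loading(snr_per_subcarrier, target_ber=1e-3, max_bits=8):
    """Assign an integer number of bits to each subcarrier under uniform power,
    using a common SNR-gap approximation for uncoded QAM."""
    gamma = -math.log(5.0 * target_ber) / 1.5   # SNR gap (linear scale)
    bits = []
    for snr in snr_per_subcarrier:
        b = math.floor(math.log2(1.0 + snr / gamma))
        bits.append(max(0, min(b, max_bits)))
    return bits

# example: three subcarriers with different linear SNRs
print(uniform_power_bit_loading([100.0, 10.0, 1.0]))
```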

01 Apr 2011
TL;DR: In this paper, it was shown that the problem of finding the common bound consistent fixpoint of a set of constraints is in fact NP-complete, even when restricted to binary linear constraints.
Abstract: Bound propagation is an important Artificial Intelligence technique used in Constraint Programming tools to deal with numerical constraints. It is typically embedded within a search procedure ("branch and prune") and used at every node of the search tree to narrow down the search space, so it is critical that it be fast. The procedure invokes constraint propagators until a common fixpoint is reached, but the known algorithms for this have a pseudo-polynomial worst-case time complexity: they are fast indeed when the variables have a small numerical range, but they have the well-known problem of being prohibitively slow when these ranges are large. An important question is therefore whether strongly-polynomial algorithms exist that compute the common bound consistent fixpoint of a set of constraints. This paper answers this question. In particular we show that this fixpoint computation is in fact NP-complete, even when restricted to binary linear constraints.
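As a small illustration of the fixpoint computation whose complexity is being settled (a generic sketch for constraints of the form a·x + b·y ≤ c with positive integer coefficients over integer intervals, not the paper's construction):

```python
def propagate_leq(a, b, c, dom_x, dom_y):
    """Bound-consistency propagator for a*x + b*y <= c (a, b > 0 assumed):
    tighten each variable's upper bound using the other variable's lower bound."""
    (lx, ux), (ly, uy) = dom_x, dom_y
    ux = min(ux, (c - b * ly) // a)
    uy = min(uy, (c - a * lx) // b)
    return (lx, ux), (ly, uy)

def fixpoint(constraints, domains):
    """Apply all propagators repeatedly until no domain changes (the common fixpoint)."""
    changed = True
    while changed:
        changed = False
        for (a, b, c, x, y) in constraints:
            new_x, new_y = propagate_leq(a, b, c, domains[x], domains[y])
            if new_x != domains[x] or new_y != domains[y]:
                domains[x], domains[y] = new_x, new_y
                changed = True
    return domains

# example: x + 2y <= 6 and 3x + y <= 7 over x, y in [0, 10]
print(fixpoint([(1, 2, 6, "x", "y"), (3, 1, 7, "x", "y")], {"x": (0, 10), "y": (0, 10)}))
```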

Book ChapterDOI
24 Oct 2011
TL;DR: It is shown that the unrestricted black-box complexity of the n-dimensional XOR- and permutation-invariant LeadingOnes function class is O(n log(n) / log log n), which shows that the recent natural-looking O(n log n) bound is not tight.
Abstract: We show that the unrestricted black-box complexity of the n-dimensional XOR- and permutation-invariant LeadingOnes function class is O(n log(n) / log log n). This shows that the recent natural-looking O(n log n) bound is not tight. The black-box optimization algorithm leading to this bound can be implemented in a way that only 3-ary unbiased variation operators are used. Hence our bound is also valid for the unbiased black-box complexity recently introduced by Lehre and Witt. The bound also remains valid if we impose the additional restriction that the black-box algorithm does not have access to the objective values but only to their relative order (ranking-based black-box complexity).
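A sketch of the function class in question: plain LeadingOnes, and its XOR- and permutation-invariant generalization in which a hidden target string z and ordering sigma are unknown to the algorithm (an illustrative rendering of the standard definition):

```python
def leading_ones(x):
    """Number of consecutive one-bits at the start of x."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def lo_z_sigma(x, z, sigma):
    """Generalized LeadingOnes: length of the longest prefix, taken in the order
    given by the permutation sigma, on which x agrees with the hidden string z."""
    count = 0
    for i in sigma:
        if x[i] != z[i]:
            break
        count += 1
    return count
```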

Proceedings ArticleDOI
11 Apr 2011
TL;DR: A phase transition phenomenon is found in the complexity of random instances of the Traveling Salesman Problem under the 2-exchange neighbor system using the two descriptors of complexity proposed.
Abstract: This work is related to the search of complexity measures for instances of combinatorial optimization problems. Particularly, we have carried out a study about the complexity of random instances of the Traveling Salesman Problem under the 2-exchange neighbor system. We have proposed two descriptors of complexity: the proportion of the size of the basin of attraction of the global optimum over the size of the search space and the proportion of the number of different local optima over the size of the search space. We have analyzed the evolution of these descriptors as the size of the problem grows. After that, and using our complexity measures, we find a phase transition phenomenon in the complexity of the instances.
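A sketch of the 2-exchange neighborhood underlying both descriptors (tours as city permutations over a distance matrix; a move reverses one segment; the local-optimum check below is the basic building block for estimating basins of attraction and counting local optima):

```python
from itertools import combinations

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_exchange_neighbors(tour):
    """All tours obtained by reversing one contiguous segment (2-exchange / 2-opt move)."""
    n = len(tour)
    for i, j in combinations(range(n), 2):
        yield tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def is_local_optimum(tour, dist):
    """True if no 2-exchange neighbor is strictly shorter."""
    best = tour_length(tour, dist)
    return all(tour_length(t, dist) >= best for t in two_exchange_neighbors(tour))
```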

Posted Content
TL;DR: In this article, the authors provide a unified method of describing simplicity and structure, and explore the performance of an algorithm motivated by Occam's Razor (called MCP, for minimum complexity pursuit), showing that it requires $O(k\log n)$ samples to recover a signal, where $k$ and $n$ represent its complexity and ambient dimension.
Abstract: The fast growing field of compressed sensing is founded on the fact that if a signal is 'simple' and has some 'structure', then it can be reconstructed accurately with far fewer samples than its ambient dimension. Many different plausible structures have been explored in this field, ranging from sparsity to low-rankness and to finite rate of innovation. However, there are important abstract questions that are yet to be answered. For instance, what are the general abstract meanings of 'structure' and 'simplicity'? Do there exist universal algorithms for recovering such simple structured objects from fewer samples than their ambient dimension? In this paper, we aim to address these two questions. Using algorithmic information theory tools such as Kolmogorov complexity, we provide a unified method of describing 'simplicity' and 'structure'. We then explore the performance of an algorithm motivated by Occam's Razor (called MCP for minimum complexity pursuit) and show that it requires $O(k\log n)$ samples to recover a signal, where $k$ and $n$ represent its complexity and ambient dimension, respectively. Finally, we discuss more general classes of signals and provide guarantees on the performance of MCP.

Journal ArticleDOI
TL;DR: The computational complexity of a strategy improvement algorithm by Hoffman and Karp for simple stochastic games is studied; a bound of O(2^n/n) on the convergence time of the Hoffman-Karp algorithm is proved, giving the first non-trivial upper bounds on the convergence time of these strategy improvement algorithms.