
Showing papers on "Average-case complexity" published in 2010


Proceedings ArticleDOI
05 Jun 2010
TL;DR: New ways to simulate 2-party communication protocols to get protocols with potentially smaller communication and a direct sum theorem for randomized communication complexity are described.
Abstract: We describe new ways to simulate 2-party communication protocols to get protocols with potentially smaller communication. We show that every communication protocol that communicates C bits and reveals I bits of information about the inputs to the participating parties can be simulated by a new protocol involving at most ~O(√(CI)) bits of communication. If the protocol reveals I bits of information about the inputs to an observer that watches the communication in the protocol, we show how to carry out the simulation with ~O(I) bits of communication. These results lead to a direct sum theorem for randomized communication complexity. Ignoring polylogarithmic factors, we show that for worst case computation, computing n copies of a function requires √n times the communication required for computing one copy of the function. For average case complexity, given any distribution μ on inputs, computing n copies of the function on n inputs sampled independently according to μ requires √n times the communication for computing one copy. If μ is a product distribution, computing n copies on n independent inputs sampled according to μ requires n times the communication required for computing the function. We also study the complexity of computing the sum (or parity) of n evaluations of f, and obtain results analogous to those above.
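
Stated symbolically (a paraphrase of the bounds in the abstract, with polylogarithmic factors suppressed; R denotes randomized worst-case communication complexity and D_μ distributional complexity under input distribution μ):

```latex
% Compression: a protocol communicating C bits whose transcript reveals I bits of
% (internal) information can be simulated with \tilde{O}(\sqrt{C\,I}) bits of
% communication; if I is the information revealed to an external observer,
% \tilde{O}(I) bits suffice.
%
% Direct sum, ignoring polylogarithmic factors:
R\bigl(f^{\,n}\bigr) \;\gtrsim\; \sqrt{n}\cdot R(f), \qquad
D_{\mu^{n}}\bigl(f^{\,n}\bigr) \;\gtrsim\; \sqrt{n}\cdot D_{\mu}(f), \qquad
D_{\mu^{n}}\bigl(f^{\,n}\bigr) \;\gtrsim\; n\cdot D_{\mu}(f)\ \ \text{for product distributions } \mu .
```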

182 citations


Book ChapterDOI
05 Sep 2010
TL;DR: A dynamic programming solution to the reconstruction problem for "indoor" Manhattan worlds (a sub-class of Manhattan worlds), which deterministically finds the global optimum and exhibits computational complexity linear in both model complexity and image size.
Abstract: A number of recent papers have investigated reconstruction under the Manhattan-world assumption, in which surfaces in the world are assumed to be aligned with one of three dominant directions [1,2,3,4]. In this paper we present a dynamic programming solution to the reconstruction problem for "indoor" Manhattan worlds (a sub-class of Manhattan worlds). Our algorithm deterministically finds the global optimum and exhibits computational complexity linear in both model complexity and image size. This is an important improvement over previous methods that were either approximate [3] or exponential in model complexity [4]. We present results for a new dataset containing several hundred manually annotated images, which is released in conjunction with this paper.
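
The following is not the paper's reconstruction algorithm, only a minimal sketch of the general column-wise dynamic-programming pattern the abstract alludes to: one label per image column, a cost that splits into per-column data terms plus transition penalties, and an exact optimum found in time linear in the number of columns for a fixed label set.

```python
# Minimal sketch of a column-wise dynamic program (not the paper's algorithm).
# Each image column gets one label; the total cost is a sum of per-column data
# costs plus pairwise transition costs, so the global optimum is found exactly
# in O(num_columns * num_labels^2) time -- linear in image width.

def column_dp(data_cost, transition_cost):
    """data_cost[c][l]: cost of label l at column c.
    transition_cost[l1][l2]: cost of switching from l1 to l2 between columns."""
    n_cols, n_labels = len(data_cost), len(data_cost[0])
    best = list(data_cost[0])                 # best[l]: optimal cost of a prefix ending in label l
    back = [[0] * n_labels for _ in range(n_cols)]
    for c in range(1, n_cols):
        new_best = []
        for l in range(n_labels):
            prev, cost = min(
                ((p, best[p] + transition_cost[p][l]) for p in range(n_labels)),
                key=lambda t: t[1])
            back[c][l] = prev
            new_best.append(cost + data_cost[c][l])
        best = new_best
    # Backtrack the optimal labelling.
    l = min(range(n_labels), key=lambda i: best[i])
    labels = [l]
    for c in range(n_cols - 1, 0, -1):
        l = back[c][l]
        labels.append(l)
    return best[labels[0]], labels[::-1]

# Toy example: 5 columns, 2 labels, unit cost for switching labels.
cost, labels = column_dp(
    data_cost=[[0, 2], [0, 2], [2, 0], [2, 0], [0, 2]],
    transition_cost=[[0, 1], [1, 0]])
print(cost, labels)
```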

56 citations


Dissertation
01 Jan 2010
TL;DR: The results show that, in certain models of computation, solving k-CLIQUE in the average case requires Ω(n^{k/4}) resources (moreover, k/4 is tight); a novel Size Hierarchy Theorem for uniform AC0 is also obtained.
Abstract: The computational problem of testing whether a graph contains a complete subgraph of size k is among the most fundamental problems studied in theoretical computer science. This thesis is concerned with proving lower bounds for k-CLIQUE, as this problem is known. Our results show that, in certain models of computation, solving k-CLIQUE in the average case requires Ω(n^{k/4}) resources (moreover, k/4 is tight). Here the models of computation are bounded-depth Boolean circuits and unbounded-depth monotone circuits, the complexity measure is the number of gates, and the input distributions are random graphs with an appropriate density of edges. Such random graphs (the well-studied Erdos-Renyi random graphs) are widely believed to be a source of computationally hard instances for clique problems (as Karp suggested in 1976). Our results are the first unconditional lower bounds supporting this hypothesis. For bounded-depth Boolean circuits, our average-case hardness result significantly improves the previous worst-case lower bounds of Ω(n^{k/poly(d)}) for depth-d circuits. In particular, our lower bound of Ω(n^{k/4}) has no noticeable dependence on d for circuits of depth d ≤ k^{-2} log n/log log n, thus bypassing the previous "size-depth tradeoffs". As a consequence, we obtain a novel Size Hierarchy Theorem for uniform AC0. A related application answers a longstanding open question in finite model theory (raised by Immerman in 1982): we show that the hierarchy of bounded-variable fragments of first-order logic is strict on finite ordered graphs. Additional results of this thesis characterize the average-case descriptive complexity of k-CLIQUE through the lens of first-order logic. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
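
For intuition about the input distribution, the sketch below samples an Erdos-Renyi graph at roughly the k-clique appearance threshold p ≈ n^{-2/(k-1)} (the "appropriate density of edges" of the abstract) and checks for a k-clique by brute force, which costs Θ(n^k) for constant k; the thesis shows that circuits cannot do much better than n^{k/4} on average on such inputs.

```python
# Sample an Erdos-Renyi graph at the k-clique threshold density p ~ n^(-2/(k-1))
# and look for a k-clique by brute force (Theta(n^k) work for constant k).
import itertools
import random

def sample_gnp(n, p, seed=0):
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p}

def has_k_clique(n, edges, k):
    return any(all((a, b) in edges for a, b in itertools.combinations(s, 2))
               for s in itertools.combinations(range(n), k))

n, k = 30, 4
p = n ** (-2.0 / (k - 1))      # threshold density: k-cliques appear with constant probability
edges = sample_gnp(n, p)
print(f"p = {p:.3f}, k-clique found: {has_k_clique(n, edges, k)}")
```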

52 citations


Journal ArticleDOI
TL;DR: A novel variation of the forward-backward (FB) algorithm - called the Efficient Forward Filtering Backward Smoothing (EFFBS) algorithm - is proposed to reduce the memory complexity without incurring computational overhead.

51 citations


Proceedings ArticleDOI
14 Mar 2010
TL;DR: This work proposes a fast distributed algorithm to build all-to-one shortest paths with polynomial message complexity and time complexity, and proposes an efficient distributed algorithm for time-dependent shortest path maintenance.
Abstract: We revisit the shortest path problem in asynchronous duty-cycled wireless sensor networks, which exhibit time-dependent features. We model the time-varying link cost and distance from each node to the sink as periodic functions. We show that the time-cost function satisfies the FIFO property, which makes the time-dependent shortest path problem solvable in polynomial-time. Using the $\beta$-synchronizer, we propose a fast distributed algorithm to build all-to-one shortest paths with polynomial message complexity and time complexity. The algorithm determines the shortest paths for all discrete times with a single execution, in contrast with multiple executions needed by previous solutions. We further propose an efficient distributed algorithm for time-dependent shortest path maintenance. The proposed algorithm is loop-free with low message complexity and low space complexity of $O(maxdeg)$, where $maxdeg$ is the maximum degree for all nodes. The performance of our solution is evaluated under diverse network configurations. The results suggest that our algorithm is more efficient than previous solutions in terms of message complexity and space complexity.
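
As background for why the FIFO property matters, here is a minimal centralized sketch (the paper's algorithm is distributed and also handles maintenance, which is not shown): when departing earlier never makes you arrive later, a Dijkstra-style label-setting computation over earliest arrival times remains exact for time-dependent link costs.

```python
# Centralized time-dependent Dijkstra sketch (illustration only; the paper's
# algorithm is distributed). arrival_fn(t) gives the arrival time at the next
# hop when departing at time t; the FIFO property means it is non-decreasing in t.
import heapq

def td_dijkstra(graph, source, t0):
    """graph: {u: [(v, arrival_fn), ...]}; returns earliest arrival time per node."""
    dist = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > dist.get(u, float("inf")):
            continue
        for v, arrival in graph.get(u, []):
            tv = arrival(t)                      # FIFO: non-decreasing in t
            if tv < dist.get(v, float("inf")):
                dist[v] = tv
                heapq.heappush(heap, (tv, v))
    return dist

# Toy duty-cycled link: node b only forwards at integer multiples of 5 time units.
wait_for_slot = lambda t: (t // 5 + 1) * 5 + 1   # arrival time at the next hop
graph = {"a": [("b", lambda t: t + 2)], "b": [("c", wait_for_slot)]}
print(td_dijkstra(graph, "a", t0=0))             # earliest arrival times from 'a'
```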

50 citations


Proceedings ArticleDOI
01 Jan 2010
TL;DR: A modular framework is introduced that allows one to infer upper bounds on the derivational complexity of term rewrite systems by combining different criteria, and it is proved that this framework is strictly more powerful than the conventional setting.

Abstract: In this paper we introduce a modular framework which allows one to infer (feasible) upper bounds on the (derivational) complexity of term rewrite systems by combining different criteria. All current approaches to analyzing derivational complexity are based on a single termination proof, possibly preceded by transformations. We prove that the modular framework is strictly more powerful than the conventional setting. Furthermore, the results have been implemented and experiments show significant gains in power.

41 citations


Journal ArticleDOI
TL;DR: A remarkable relation between effective complexity and Bennett's logical depth is shown: If the effective complexity of a string x exceeds a certain explicit threshold then that string must have astronomically large depth; otherwise, the depth can be arbitrarily small.
Abstract: Effective complexity measures the information content of the regularities of an object. It has been introduced by Gell-Mann and Lloyd to avoid some of the disadvantages of Kolmogorov complexity. In this paper, we derive a precise definition of effective complexity in terms of algorithmic information theory. We analyze rigorously its basic properties such as effective simplicity of incompressible binary strings and existence of strings that have effective complexity close to their lengths. Since some of the results have appeared independently in the context of algorithmic statistics by Gacs , we discuss the relation of effective complexity to the corresponding complexity measures, in particular to Kolmogorov minimal sufficient statistics. As our main new result we show a remarkable relation between effective complexity and Bennett's logical depth: If the effective complexity of a string x exceeds a certain explicit threshold then that string must have astronomically large depth; otherwise, the depth can be arbitrarily small.

35 citations


Journal ArticleDOI
TL;DR: A fast mode decision algorithm, based on a Pareto-optimal macroblock classification scheme, is combined with a dynamic complexity control algorithm that adjusts the MB class decisions such that a constant frame rate is achieved.
Abstract: This article presents a novel real-time algorithm for reducing and dynamically controlling the computational complexity of an H.264 video encoder implemented in software. A fast mode decision algorithm, based on a Pareto-optimal macroblock classification scheme, is combined with a dynamic complexity control algorithm that adjusts the MB class decisions such that a constant frame rate is achieved. The average coding efficiency of the proposed algorithm was found to be similar to that of conventional encoding operating at half the frame rate. The proposed algorithm was found to provide lower average bitrate and distortion than static complexity scaling.
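
The complexity-control side of the abstract can be pictured as a simple feedback loop: measure recent frame encoding times and adjust a complexity knob so that the target frame rate is tracked. The sketch below is only a schematic illustration of that idea, with a hypothetical "budget" parameter standing in for the macroblock class decisions; it is not the paper's controller.

```python
# Schematic feedback controller for dynamic complexity control (illustration only).
# 'budget' is a hypothetical knob: the fraction of macroblocks allowed to use
# expensive coding modes.

def update_budget(budget, frame_time, target_time, gain=0.5):
    """Shrink the complexity budget when frames are too slow, grow it when fast."""
    error = (target_time - frame_time) / target_time
    return min(1.0, max(0.05, budget * (1.0 + gain * error)))   # clamp to [0.05, 1]

budget, target = 0.8, 1.0 / 30.0          # aim for 30 frames per second
for frame_time in [0.040, 0.038, 0.031, 0.029, 0.033]:   # measured encode times (s)
    budget = update_budget(budget, frame_time, target)
    print(f"frame took {frame_time*1000:.0f} ms -> budget {budget:.2f}")
```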

26 citations


01 Jan 2010

22 citations


Book ChapterDOI
30 Jun 2010
TL;DR: A very simple proof of a 7n/3 - c lower bound on the circuit complexity of a large class of functions representable by high degree polynomials over GF(2) is given.
Abstract: In this note, we use lower bounds on Boolean multiplicative complexity to prove lower bounds on Boolean circuit complexity. We give a very simple proof of a 7n/3 - c lower bound on the circuit complexity of a large class of functions representable by high degree polynomials over GF(2). The key idea of the proof is a circuit complexity measure assigning different weights to XOR and AND gates.

20 citations


Book ChapterDOI
16 Jun 2010
TL;DR: An exponential lower bound is proved on the average time required to invert Goldreich's function by drunken [AHI05] backtracking algorithms, thereby resolving the open question stated in [CEMT09].
Abstract: We prove an exponential lower bound on the average time required to invert Goldreich's function by drunken [AHI05] backtracking algorithms, thereby resolving the open question stated in [CEMT09]. Goldreich's function [Gol00] has n binary inputs and n binary outputs. Every output bit depends on d input bits and is computed from them by a fixed predicate of arity d. Our instance of Goldreich's function is based on an expander graph and on nonlinear predicates of a special type. A drunken algorithm is a backtracking algorithm that chooses a variable for splitting in an arbitrary manner and randomly chooses which value of that variable to investigate first. Our proof technique significantly simplifies the one used in [AHI05] and [CEMT09].
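
For concreteness, a toy instantiation of Goldreich's function can be written in a few lines: each of the n output bits applies a fixed d-ary predicate to d input positions. The sketch below uses random index sets (the paper requires an expander) and the frequently studied predicate that XORs the first d-2 inputs and adds the AND of the last two, purely to illustrate the shape of the construction.

```python
# Toy instantiation of Goldreich's function (illustration; the paper uses
# expander-based index sets and a special class of nonlinear predicates).
import random
from functools import reduce

def make_goldreich(n, d, seed=0):
    rng = random.Random(seed)
    index_sets = [rng.sample(range(n), d) for _ in range(n)]   # random here, expander-based in the paper

    def predicate(bits):
        # XOR of the first d-2 bits plus an AND of the last two (a common nonlinear choice).
        return reduce(lambda a, b: a ^ b, bits[:-2]) ^ (bits[-2] & bits[-1])

    return lambda x: [predicate([x[i] for i in s]) for s in index_sets]

f = make_goldreich(n=16, d=5)
rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(16)]
print("f(x) =", f(x))
```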

Proceedings ArticleDOI
05 Jun 2010
TL;DR: A new framework for discussing computational complexity of problems involving uncountably many objects, such as real numbers, sets and functions, that can be represented only through approximation is proposed, using a certain class of string functions as names representing these objects.
Abstract: We propose a new framework for discussing computational complexity of problems involving uncountably many objects, such as real numbers, sets and functions, that can be represented only through approximation. The key idea is to use a certain class of string functions, which we call regular functions, as names representing these objects. These are more expressive than infinite sequences, which served as names in prior work that formulated complexity in more restricted settings. An important advantage of using regular functions is that we can define their size in the way inspired by higher-type complexity theory. This enables us to talk about computation on regular functions whose time or space is bounded polynomially in the input size, giving rise to more general analogues of the classes P, NP, and PSPACE. We also define NP- and PSPACE-completeness under suitable many-one reductions.Because our framework separates machine computation and semantics, it can be applied to problems on sets of interest in analysis once we specify a suitable representation (encoding). As prototype applications, we consider the complexity of functions (operators) on real numbers, real sets, and real functions. The latter two cannot be represented succinctly using existing approaches based on infinite sequences, so ours is the first treatment of them. As an interesting example, the task of numerical algorithms for solving the initial value problem of differential equations is naturally viewed as an operator taking real functions to real functions. As there was no complexity theory for operators, previous results could only state how complex the solution can be. We now reformulate them to show that the operator itself is polynomial-space complete.

Journal ArticleDOI
TL;DR: It is shown, in a more systematic study of non-uniform reductions, among other things, that non-uniformity can be removed at the cost of more queries, and an oracle is constructed relative to which this trade-off is optimal.
Abstract: We study properties of non-uniform reductions and related completeness notions. We strengthen several results of Hitchcock and Pavan (ICALP (1), Lecture Notes in Computer Science, vol. 4051, pp. 465–476, Springer, 2006) and give a trade-off between the amount of advice needed for a reduction and its honesty on NEXP. We construct an oracle relative to which this trade-off is optimal. We show, in a more systematic study of non-uniform reductions, among other things that non-uniformity can be removed at the cost of more queries. In line with Post’s program for complexity theory (Buhrman and Torenvliet in Bulletin of the EATCS 85, pp. 41–51, 2005) we connect such ‘uniformization’ properties to the separation of complexity classes.

Journal ArticleDOI
26 Aug 2010-PLOS ONE
TL;DR: The presented approach transforms the classic problem of assessing the complexity of an object into the realm of statistics, and may open a wider applicability of this complexity measure to diverse application areas.
Abstract: Background: The evaluation of the complexity of an observed object is an old but outstanding problem. In this paper we take up this problem by introducing a measure called statistic complexity. Methodology/Principal Findings: This complexity measure differs from all other measures in the following senses. First, it is a bivariate measure that compares two objects, corresponding to pattern-generating processes, with each other on the basis of the normalized compression distance. Second, it provides the quantification of an error that could have been encountered by comparing samples of finite size from the underlying processes. Hence, the statistic complexity provides a statistical quantification of the statement 'x is similarly complex as y'. Conclusions: The presented approach, ultimately, transforms the classic problem of assessing the complexity of an object into the realm of statistics. This may open a wider applicability of this complexity measure to diverse application areas.
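
The normalized compression distance underlying the proposed measure is easy to approximate with an off-the-shelf compressor standing in for the (uncomputable) Kolmogorov complexity; the statistical resampling step that is the paper's actual contribution is not reproduced here.

```python
# Approximate normalized compression distance (NCD) with zlib as the compressor.
import random
import zlib

def clen(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

rng = random.Random(0)
regular = b"01" * 200                                    # highly compressible
noisy = bytes(rng.getrandbits(8) for _ in range(400))    # essentially incompressible
print(f"NCD(regular, regular) = {ncd(regular, regular):.2f}")   # small
print(f"NCD(regular, noisy)   = {ncd(regular, noisy):.2f}")     # close to 1
```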

Book ChapterDOI
19 Apr 2010
TL;DR: A lattice algorithm is presented that is specifically designed for some classical applications of lattice reduction, namely lattice bases with a generalized knapsack-type structure where the target vectors are boundably short; its running time improves on the quadratic-complexity floating-point LLL algorithms.
Abstract: We present a lattice algorithm specifically designed for some classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors are boundably short. For such applications, the complexity of the algorithm improves traditional lattice reduction by replacing some dependence on the bit-length of the input vectors by some dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then our algorithm is only linear in the bit-length of the input entries, which is an improvement over the quadratic complexity floating-point LLL algorithms. To illustrate the usefulness of this algorithm we show that a direct application to factoring univariate polynomials over the integers leads to the first complexity bound improvement since 1984. A second application is algebraic number reconstruction, where a new complexity bound is obtained as well.

Journal ArticleDOI
TL;DR: Some recent work on relaxed notions of derandomization that allow the deterministic simulation to err on some inputs is surveyed.
Abstract: A fundamental question in complexity theory is whether every randomized polynomial time algorithm can be simulated by a deterministic polynomial time algorithm (that is, whether BPP=P). A beautiful theory of derandomization was developed in recent years in an attempt to solve this problem. In this article we survey some recent work on relaxed notions of derandomization that allow the deterministic simulation to err on some inputs. We use this opportunity to also provide a brief overview of some results and research directions in "classical derandomization".

Journal ArticleDOI
TL;DR: The improvements proposed in this work decrease the computational complexity of the A* algorithm by using further information from the first phase of the algorithm to obtain a more accurate heuristic function and to find early termination conditions for the A* algorithm.
Abstract: The A* algorithm is a graph search algorithm which has shown good results in terms of computational complexity for Maximum Likelihood (ML) decoding of tailbiting convolutional codes. The decoding of tailbiting codes with this algorithm is performed in two phases. In the first phase, a typical Viterbi decoding is employed to collect information regarding the trellis. The A* algorithm is then applied in the second phase, using the information obtained in the first one to calculate the heuristic function. The improvements proposed in this work decrease the computational complexity of the A* algorithm using further information from the first phase of the algorithm. This information is used for obtaining a more accurate heuristic function and finding early terminating conditions for the A* algorithm. Simulation results show that the proposed modifications decrease the complexity of ML decoding with the A* algorithm in terms of the performed number of operations.
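
As a reminder of the mechanism being tuned, here is a generic A* search; in the decoder the graph is the code trellis, edge weights are branch metrics, and the heuristic comes from the Viterbi pass of the first phase. The paper's refinements concern that heuristic and early termination and are not shown.

```python
# Generic A* search (illustration of the mechanism; the paper's contribution is
# a sharper heuristic and early-termination tests for the tailbiting-code trellis).
import heapq

def a_star(neighbors, heuristic, start, goal):
    """neighbors(u) -> iterable of (v, edge_cost); heuristic(u) must not overestimate."""
    g = {start: 0.0}
    heap = [(heuristic(start), start, [start])]
    while heap:
        f, u, path = heapq.heappop(heap)
        if u == goal:
            return g[u], path
        for v, w in neighbors(u):
            if g[u] + w < g.get(v, float("inf")):
                g[v] = g[u] + w
                heapq.heappush(heap, (g[v] + heuristic(v), v, path + [v]))
    return None

# Toy graph standing in for a trellis section.
graph = {"s": [("a", 1.0), ("b", 4.0)], "a": [("t", 5.0)], "b": [("t", 1.0)], "t": []}
h = {"s": 2.0, "a": 2.0, "b": 1.0, "t": 0.0}          # admissible lower bounds
print(a_star(lambda u: graph[u], lambda u: h[u], "s", "t"))
```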

Posted Content
15 Apr 2010
TL;DR: A large new collection of W[1]-hard problems parameterized by the tree-width of the input graph is exhibited, namely ∃[σ, ρ]-Dominating Set when σ is a set with arbitrarily large gaps between two consecutive elements and ρ is cofinite.

Abstract: The concept of generalized domination unifies well-known variants of domination-like problems. A generalized domination (also called [σ, ρ]-Dominating Set) problem consists in finding a dominating set for which every vertex of the input graph is satisfied, given two sets of constraints σ and ρ. Very few problems are known to be W[1]-hard when restricted to graphs of bounded tree-width. We exhibit here a large new (infinite) collection of W[1]-hard problems parameterized by the tree-width of the input graph, namely ∃[σ, ρ]-Dominating Set when σ is a set with arbitrarily large gaps between two consecutive elements and ρ is cofinite (with an additional technical constraint on σ).

Book ChapterDOI
TL;DR: A measure of partitioning quality is introduced and its application in problem classification is highlighted; the measure shows that decomposition increases the overall complexity of the problem, which can be taken as an indicator of the measure's viability.
Abstract: Large scale problems need to be decomposed for tractability purposes. The decomposition process needs to be carefully managed to minimize the interdependencies between sub-problems. A measure of partitioning quality is introduced and its application in problem classification is highlighted. The measure is complexity based (real complexity) and can be employed for both disjoint and overlap decompositions. The measure shows that decomposition increases the overall complexity of the problem, which can be taken as the measure’s viability indicator. The real complexity can also indicate the decomposability of the design problem, when the complexity of the whole after decomposition is less than the complexity sum of sub-problems. As such, real complexity can specify the necessary paradigm shift from decomposition based problem solving to evolutionary and holistic problem solving.

Journal ArticleDOI
TL;DR: During and following that exciting week, many people asked me to explain the P vs. NP problem and why it is so important to computer science; at the same time, I believe that computational complexity theory sheds limited light on the behavior of algorithms in the real world.
Abstract: August 7 and 8, and suddenly the whole world was paying attention. Richard Lipton's August 15 blog entry at blog@CACM was viewed by about 10,000 readers within a week. Hundreds of computer scientists and mathematicians, in a massive Web-enabled collaborative effort, dissected the proof in an intense attempt to verify its validity. By the time the New York Times published an article on the topic on August 16, major gaps had been identified, and the excitement was starting to subside. The P vs. NP problem withstood another challenge and remained wide open. During and following that exciting week many people have asked me to explain the problem and why it is so important to computer science. "If everyone believes that P is different than NP," I was asked, "why is it so important to prove the claim?" The answer, of course, is that believing is not the same as knowing. The conventional "wisdom" can be wrong. While our intuition does tell us that finding solutions ought to be more difficult than checking solutions, which is what the P vs. NP problem is about, intuition can be a poor guide to the truth. Case in point: modern physics. While the P vs. NP quandary is a central problem in computer science, we must remember that a resolution of the problem may have limited practical impact. It is conceivable that P = NP, but the polynomial-time algorithms yielded by a proof of the equality are completely impractical, due to a very large degree of the polynomial or a very large multiplicative constant; after all, (10n)^{1000} is a polynomial! Similarly, it is conceivable that P ≠ NP, but NP problems can be solved by algorithms with running time bounded by n^{log log log n}, a bound that is not polynomial but incredibly well behaved. Even more significant, I believe, is the fact that computational complexity theory sheds limited light on behavior of algorithms in the real world. Take, for example, the Boolean Satisfiability Problem (SAT), which is the canonical NP-complete problem. When I was a graduate student, SAT was a "scary" problem, not to be touched with a 10-foot pole. Garey and Johnson's classical textbook showed a long sad line of programmers who have failed to solve NP-complete problems. Guess what? These programmers have been busy! The August 2009 issue of Communications contained …

Proceedings ArticleDOI
01 Sep 2010
TL;DR: A randomized scheduling algorithm is proposed that can stabilize the system for any admissible traffic satisfying the strong law of large numbers; it is highly scalable and a good choice for future high-speed switch designs.
Abstract: Internet traffic has increased at a very fast pace in recent years. The traffic demand requires that future packet switching systems should be able to switch packets in a very short time, i.e., just a few nanoseconds. Algorithms with lower computation complexity are more desirable for this high-speed switching design. Among the existing algorithms that can achieve 100% throughput for input-queued switches for any admissible Bernoulli traffic, ALGO3 [1] and EMHW [2] have the lowest computation complexity, which is O(logN), where N is the number of ports in the switch. In this paper, we propose a randomized scheduling algorithm, which can also stabilize the system for any admissible traffic that satisfies the strong law of large numbers. The algorithm has a complexity of O(1). Since the complexity does not increase with the size of a switch, the algorithm is highly scalable and a good choice for future high-speed switch designs. We also show that the algorithm can be implemented in a distributed way by using a low-rate control channel. Simulation results show that the algorithm can provide a good delay performance as compared to algorithms with higher computation complexity.
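
To convey the flavor of randomized switch scheduling, the sketch below shows the classic "pick and compare" scheme attributed to Tassiulas: sample a random matching and keep whichever of it and the previous matching has larger queue weight. The paper's O(1)-complexity algorithm is a different construction; this is only meant to illustrate why randomization plus memory can stabilize a switch without computing a maximum-weight matching each slot.

```python
# Tassiulas-style "pick and compare" randomized scheduler (illustration only;
# the paper proposes a different algorithm with O(1) per-slot complexity).
import random

def random_matching(n, rng):
    perm = list(range(n))
    rng.shuffle(perm)
    return perm                          # output port assigned to each input port

def weight(matching, queues):
    return sum(queues[i][matching[i]] for i in range(len(matching)))

def schedule(queues, prev, rng):
    cand = random_matching(len(queues), rng)
    # Keep whichever of {previous matching, random candidate} has larger queue weight.
    return cand if weight(cand, queues) > weight(prev, queues) else prev

rng = random.Random(0)
queues = [[3, 0, 1], [0, 5, 2], [4, 1, 0]]     # queues[i][j]: packets from input i to output j
matching = list(range(3))
for _ in range(5):
    matching = schedule(queues, matching, rng)
print("chosen matching:", matching, "weight:", weight(matching, queues))
```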

Book ChapterDOI
18 Dec 2010
TL;DR: An algorithm is designed that achieves asymptotically optimal worst-case and average-case time complexity with an optimal team of k = 2 agents, thus improving on earlier results that required O(n) agents.
Abstract: In network environments supporting mobile entities (called robots or agents), a black hole is a harmful site that destroys any incoming entity without leaving any visible trace. The black-hole search problem is the task for a team of k > 1 mobile entities, starting from the same safe location and executing the same algorithm, to determine within finite time the location of the black hole. In this paper we consider the black-hole search problem in asynchronous ring networks of n nodes, and focus on the time complexity. It is known that any algorithm for black-hole search in a ring requires at least 2(n-2) time in the worst case. The best existing algorithm achieves this bound with a team of n - 1 agents, with an average time cost of 2(n-2), equal to the worst case. In this paper we first show how the same number of agents, using only 2 extra time units beyond optimal in the worst case, can solve the problem in only 7/4 n - O(1) time on average. We then prove that the optimal average-case complexity of 3/2 n - O(1) can be achieved without increasing the worst case by using 2(n-1) agents. Finally, we design an algorithm that achieves asymptotically optimal worst-case and average-case time complexity with an optimal team of k = 2 agents, thus improving on earlier results that required O(n) agents.

Journal ArticleDOI
TL;DR: A method is proposed for estimating the computational complexity of the N-vehicle exploration problem by analyzing its input conditions, together with a new technique for analyzing the computation of NP problems.
Abstract: This paper proposes a method of estimating the computational complexity of the N-vehicle exploration problem by analyzing its input conditions. The N-vehicle problem is first formulated as determining the optimal replacement in the set of permutations of 1 to N. The complexity of the problem is the factorial of N (the input scale of the problem). To balance the accuracy and efficiency of general algorithms, this paper presents a new systematic algorithm design and discusses the correspondence between the complexity of the problem and its input conditions, rather than just putting forward a uniform approximation algorithm as usual. This is a new technique for analyzing the computation of NP problems. The method for establishing this correspondence is then presented. We finally carry out a simulation to verify the advantages of the method: 1) it decreases the computation needed for enumeration; 2) it efficiently obtains the computational complexity for any N-vehicle case; 3) it guides algorithm design for any N-vehicle case according to the complexity estimated by the method.
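
The abstract frames the N-vehicle problem as optimization over the N! permutations of 1 to N; the factorial-cost baseline it seeks to avoid looks like the enumeration below. The objective function here is a hypothetical placeholder, since the paper's actual cost model is not given in the abstract.

```python
# Baseline enumeration over all N! vehicle orderings (the factorial-cost search
# the paper wants to avoid). The cost function is a hypothetical placeholder.
import itertools

def cost(order, fuel):
    # Placeholder objective: weight each vehicle's fuel figure by its position.
    return sum((pos + 1) * fuel[v] for pos, v in enumerate(order))

def best_ordering(fuel):
    n = len(fuel)
    return min(itertools.permutations(range(n)), key=lambda order: cost(order, fuel))

fuel = [7, 2, 5, 3]                      # hypothetical per-vehicle fuel figures
order = best_ordering(fuel)
print("best order:", order, "cost:", cost(order, fuel))
```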

Proceedings ArticleDOI
05 Jun 2010
TL;DR: This is an errata for the STOC'06 paper "On Basing One-Way Functions on NP-Hardness": there is a gap in the proof of the results regarding adaptive reductions, and it is currently not known whether Theorem 3 holds.
Abstract: This is an errata for our STOC'06 paper, "On Basing One-Way Functions on NP-Hardness".There is a gap in the proof of our results regarding adaptive reductions, and we currently do not know whether Theorem 3 (as stated in Section 2) holds.

Book ChapterDOI
13 Dec 2010
TL;DR: It is proved that any connected graph can be transformed into a synchronized one by making suitable groups of twin vertices, and it is deduced that any connected graph is an induced subgraph of a synchronizing graph, implying that synchronizability has great structural complexity.

Abstract: This article deals with the general ideas of almost global synchronization of Kuramoto coupled oscillators and synchronizing graphs. It reviews the main existing results and gives some new results about the complexity of the problem. It is proved that any connected graph can be transformed into a synchronized one by making suitable groups of twin vertices. As a corollary, it is deduced that any connected graph is an induced subgraph of a synchronizing graph. This implies that synchronizability has a great structural complexity. Finally, the former result is applied to find a two-integer-parameter family G(a,b) of connected graphs such that, if b is the k-th power of 10, the synchronizability of G(a,b) is equivalent to finding the k-th digit in the base-10 expansion of the square root of 2. Thus, the complexity of classifying G(a,b) is of the same order as that of computing the square root of 2. This is the first result so far about the computational complexity of the synchronizability problem.
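
For readers unfamiliar with the model: the Kuramoto dynamics on a graph with adjacency matrix A are dθ_i/dt = ω_i + Σ_j A_ij sin(θ_j − θ_i), and a graph is (roughly) called synchronizing when, for identical oscillators, almost every initial condition converges to the fully synchronized state. Below is a minimal Euler-integration sketch on a complete graph, not tied to the paper's G(a,b) family.

```python
# Euler integration of the Kuramoto model on a graph (identical oscillators, omega = 0).
import math
import random

def kuramoto(adj, theta, steps=2000, dt=0.01):
    n = len(adj)
    for _ in range(steps):
        dtheta = [sum(adj[i][j] * math.sin(theta[j] - theta[i]) for j in range(n))
                  for i in range(n)]
        theta = [theta[i] + dt * dtheta[i] for i in range(n)]
    return theta

def order_parameter(theta):
    # r = 1 means full phase synchronization.
    re = sum(math.cos(t) for t in theta) / len(theta)
    im = sum(math.sin(t) for t in theta) / len(theta)
    return math.hypot(re, im)

# Complete graph on 5 vertices: known to synchronize from almost every start.
n = 5
adj = [[1 if i != j else 0 for j in range(n)] for i in range(n)]
rng = random.Random(0)
theta0 = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
print(f"order parameter after integration: {order_parameter(kuramoto(adj, theta0)):.3f}")
```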

Journal ArticleDOI
TL;DR: Although this result is valid in the large-size limit and for an overlap synaptic matrix that is ultrametric, it provides a useful tool for inferring the appropriate architecture a network must have to reproduce an arbitrary realizable Boolean function.
Abstract: We obtained an analytical expression for the computational complexity of many layered committee machines with a finite number of hidden layers (L < 8) using the generalization complexity measure introduced by Franco et al (2006) IEEE Trans. Neural Netw. 17 578. Although our result is valid in the large-size limit and for an overlap synaptic matrix that is ultrametric, it provides a useful tool for inferring the appropriate architecture a network must have to reproduce an arbitrary realizable Boolean function.

Journal ArticleDOI
TL;DR: This work investigates the effective complexity of binary strings generated by stationary, in general not computable, processes and shows that under not too strong conditions long typical process realizations are effectively simple.
Abstract: The concept of the effective complexity of an object as the minimal description length of its regularities was initiated by Gell-Mann and Lloyd. The regularities are modeled by means of ensembles, that is, probability distributions on finite binary strings. In our previous paper we proposed a definition of effective complexity in precise terms of algorithmic information theory. Here we investigate the effective complexity of binary strings generated by stationary, in general not computable, processes. We show that under not too strong conditions long typical process realizations are effectively simple. Our results become most transparent in the context of coarse effective complexity, which is a modification of the original notion of effective complexity that uses fewer parameters in its definition. A similar modification of the related concept of sophistication has been suggested by Antunes and Fortnow.

Journal ArticleDOI
TL;DR: It is shown that all natural NP-complete problems can be coupled with P-computable distributions such that the resulting distributional problem is hard for distributional NP.
Abstract: The theory of average case complexity studies the expected complexity of computational tasks under various specific distributions on the instances, rather than their worst case complexity. Thus, this theory deals with distributional problems, defined as pairs each consisting of a decision problem and a probability distribution over the instances. While for applications utilizing hardness, such as cryptography, one seeks an efficient algorithm that outputs random instances of some problem that are hard for any algorithm with high probability, the resulting hard distributions in these cases are typically highly artificial, and do not establish the hardness of the problem under “interesting” or “natural” distributions. This paper studies the possibility of proving generic hardness results (i.e., for a wide class of NP-complete problems), under “natural” distributions. Since it is not clear how to define a class of “natural” distributions for general NP-complete problems, one possibility is to impose some strong computational constraint on the distributions, with the intention of this constraint being to force the distributions to “look natural”. Levin, in his seminal paper on average case complexity from 1984, defined such a class of distributions, which he called P-computable distributions. He then showed that the NP-complete Tiling problem, under some P-computable distribution, is hard for the complexity class of distributional NP problems (i.e., NP with P-computable distributions). However, since then very few NP-complete problems (coupled with P-computable distributions), and in particular “natural” problems, were shown to be hard in this sense. In this paper we show that all natural NP-complete problems can be coupled with P-computable distributions such that the resulting distributional problem is hard for distributional NP.

Proceedings ArticleDOI
23 May 2010
TL;DR: In this paper, the computational complexity of optimum decoding for an orthogonal space-time block code is quantified and four equivalent techniques for optimum decoding which have the same computational complexity are specified.
Abstract: The computational complexity of optimum decoding for an orthogonal space-time block code is quantified. Four equivalent techniques of optimum decoding which have the same computational complexity are specified. Modifications to the basic formulation in special cases are calculated and illustrated by means of examples.
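
As a concrete instance of the kind of low-complexity optimum decoding being quantified, the 2x1 Alamouti code decouples into independent per-symbol decisions after linear combining; the standard textbook example is sketched below (it illustrates the general point, not the paper's specific operation counts).

```python
# ML decoding of the 2x1 Alamouti orthogonal space-time block code.
# After linear combining, each symbol is detected independently, so the
# optimal decoder's complexity is linear in the constellation size per symbol.
import cmath
import random

qpsk = [cmath.exp(1j * cmath.pi * (k / 2 + 0.25)) for k in range(4)]
rng = random.Random(0)
h1, h2 = complex(rng.gauss(0, 1), rng.gauss(0, 1)), complex(rng.gauss(0, 1), rng.gauss(0, 1))
s1, s2 = rng.choice(qpsk), rng.choice(qpsk)

noise = lambda: complex(rng.gauss(0, 0.05), rng.gauss(0, 0.05))
r1 = h1 * s1 + h2 * s2 + noise()                               # time slot 1
r2 = -h1 * s2.conjugate() + h2 * s1.conjugate() + noise()      # time slot 2

# Alamouti combining: decouples s1 and s2.
y1 = h1.conjugate() * r1 + h2 * r2.conjugate()
y2 = h2.conjugate() * r1 - h1 * r2.conjugate()

detect = lambda y: min(qpsk, key=lambda s: abs(y - (abs(h1)**2 + abs(h2)**2) * s))
print("sent:    ", s1, s2)
print("decoded: ", detect(y1), detect(y2))
```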

Book ChapterDOI
07 Jun 2010
TL;DR: It is shown that polynomials over the reals actually enjoy some properties that allow their safe use for complexity analysis, even though one can no longer use the (good) properties of the natural ordering of N employed to bound the complexity of programs.
Abstract: In the field of implicit computational complexity, we consider in this paper the fruitful branch of interpretation methods. In this area, the synthesis problem is solved by Tarski's decision procedure, and consequently interpretations are usually chosen over the reals rather than over the integers. Doing so, one can no longer use the (good) properties of the natural (well-)ordering of N employed to bound the complexity of programs. We show that, actually, polynomials over the reals benefit from some properties that allow their safe use for complexity. We illustrate this by two characterizations, one of PTIME and one of PSPACE.
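
To make "interpretation methods" concrete: one bounds derivational complexity by assigning every function symbol a monotone polynomial so that each rewrite step strictly decreases the interpreted value. The sketch below numerically checks such an interpretation for the two rewrite rules of addition over unary numerals; this is the textbook setup, not the paper's framework.

```python
# Check a polynomial interpretation for the rewrite rules
#   add(0, y) -> y         and         add(s(x), y) -> s(add(x, y)).
# Interpretations (monotone polynomials over the non-negative reals):
#   [0] = 1,   [s](x) = x + 1,   [add](x, y) = 2*x + y + 1.
# Every rewrite step strictly decreases the interpreted value, so the derivation
# length from a term t is at most [t], which is polynomial in the size of t.

ZERO = lambda: 1.0
S = lambda x: x + 1.0
ADD = lambda x, y: 2.0 * x + y + 1.0

def check(samples=range(0, 50, 7)):
    ok = True
    for x in samples:
        for y in samples:
            ok &= ADD(ZERO(), y) > y                    # rule 1: lhs > rhs
            ok &= ADD(S(x), y) > S(ADD(x, y))           # rule 2: lhs > rhs
    return ok

print("strict decrease on all sampled values:", check())
```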