
Showing papers on "Average-case complexity published in 2016"


Journal ArticleDOI
TL;DR: It is shown that, if either of two plausible average-case hardness conjectures holds, then IQP computations are hard to simulate classically up to constant additive error.
Abstract: We use the class of commuting quantum computations known as IQP (instantaneous quantum polynomial time) to strengthen the conjecture that quantum computers are hard to simulate classically. We show that, if either of two plausible average-case hardness conjectures holds, then IQP computations are hard to simulate classically up to constant additive error. One conjecture relates to the hardness of estimating the complex-temperature partition function for random instances of the Ising model; the other concerns approximating the number of zeroes of random low-degree polynomials. We observe that both conjectures can be shown to be valid in the setting of worst-case complexity. We arrive at these conjectures by deriving spin-based generalizations of the boson sampling problem that avoid the so-called permanent anticoncentration conjecture.
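For reference (this formula is not quoted from the paper), the object in the first conjecture has the generic form of an Ising partition function evaluated at a complex inverse temperature $\beta$, for a graph $G = (V, E)$ with couplings $w_{jk}$ and local fields $v_j$:

$$ Z(\beta) = \sum_{z \in \{\pm 1\}^{|V|}} \exp\Big( \beta \Big( \sum_{\{j,k\} \in E} w_{jk}\, z_j z_k + \sum_{j \in V} v_j\, z_j \Big) \Big) $$

The conjecture concerns estimating such sums, up to additive error, for random instances; the specific weight sets and error model used in the paper are not reproduced here.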

299 citations


Journal ArticleDOI
TL;DR: A new polynomial time algorithm to verify the decentralized diagnosability property of a discrete event system is proposed and can also be applied to the centralized case.
Abstract: In [1], the authors claim that there is an oversight in [2], in the sense that the proposed verifier is, in general, nondeterministic and the computational complexity analysis is incorrect. The authors in [1] also claim that the complexity of the verification algorithm presented in [3] is reduced when considering the more restrictive setting of projection masks, in contrast to the more general non-projection masks case, and equals the complexity of the verification algorithm presented in [2]. In this note, we show that the computational complexity analysis of [2] is actually correct and that the complexity of the verification algorithm presented in [3] is not reduced without additional modification of the algorithm (not yet proposed in the literature) if projection masks are used, and, therefore, is not equal to the complexity of the algorithm presented in [2].

75 citations


Journal ArticleDOI
TL;DR: The computational complexity of the proposed algorithm is modest, enabling its implementation in a real-time system even when considering long prediction horizons; a variable-speed drive system with a three-level voltage source inverter serves as an illustrative example to demonstrate the effectiveness of the proposed algorithm.
Abstract: For linear systems with integer inputs, the model predictive control problem with output reference tracking is formulated as an integer least-squares (ILS) problem. The ILS problem is solved using a modified sphere decoding algorithm, which is a particular branch-and-bound method. To reduce the computational complexity of the sphere decoder, a reduction algorithm is added as a preprocessing stage to reshape the search space in which the integer solution lies. The computational complexity of the proposed algorithm is modest, enabling its implementation in a real-time system even when considering long prediction horizons. A variable-speed drive system with a three-level voltage source inverter serves as an illustrative example to demonstrate the effectiveness of the proposed algorithm.
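For readers unfamiliar with sphere decoding, the sketch below is a minimal depth-first branch-and-bound search for the integer least-squares problem min_u ||y - H u||^2, with u drawn componentwise from a finite integer alphabet. It is a generic textbook version under simplifying assumptions (real-valued model, no reduction/preprocessing stage, no MPC-specific cost terms), not the modified algorithm of the paper; all names are illustrative.

import numpy as np

def sphere_decode(H, y, alphabet):
    """Depth-first branch-and-bound search for argmin_u ||y - H u||^2,
    u drawn componentwise from a finite integer alphabet.
    Illustrative sketch only (no reduction / preprocessing stage)."""
    Q, R = np.linalg.qr(H)           # H = Q R, R upper triangular
    z = Q.T @ y                      # transformed target
    n = H.shape[1]
    best = {"u": None, "d2": np.inf}

    def search(level, u, partial_d2):
        if partial_d2 >= best["d2"]:         # prune: outside the current sphere
            return
        if level < 0:                        # full candidate enumerated
            best["u"], best["d2"] = u.copy(), partial_d2
            return
        # residual at this level given the entries already fixed below it
        r = z[level] - sum(R[level, j] * u[j] for j in range(level + 1, n))
        # visit alphabet values closest to the unconstrained solution first
        centre = r / R[level, level]
        for s in sorted(alphabet, key=lambda v: abs(v - centre)):
            u[level] = s
            search(level - 1, u, partial_d2 + (r - R[level, level] * s) ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best["u"], best["d2"]

# Example: 3 integer inputs restricted to {-1, 0, 1}
H = np.array([[2.0, 0.3, 0.1], [0.2, 1.5, 0.4], [0.1, 0.2, 1.8]])
u_true = np.array([1, -1, 0])
y = H @ u_true
print(sphere_decode(H, y, alphabet=(-1, 0, 1)))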

58 citations


Proceedings ArticleDOI
19 Jun 2016
TL;DR: An explicit example of a search problem is obtained with external information complexity ≤ O(k), with respect to any input distribution, and distributional communication complexity ≥ 2^k, with respect to some input distribution.
Abstract: We show an exponential gap between communication complexity and external information complexity, by analyzing a communication task suggested as a candidate by Braverman. Previously, only a separation of communication complexity and internal information complexity was known. More precisely, we obtain an explicit example of a search problem with external information complexity ≤ O(k), with respect to any input distribution, and distributional communication complexity ≥ 2^k, with respect to some input distribution. In particular, this shows that a communication protocol cannot always be compressed to its external information. By a result of Braverman, our gap is the largest possible. Moreover, since the upper bound of O(k) on the external information complexity of the problem is obtained with respect to any input distribution, our result implies an exponential gap between communication complexity and information complexity (both internal and external) in the non-distributional setting of Braverman. In this setting, no gap was previously known, even for internal information complexity.

38 citations


Proceedings ArticleDOI
01 Dec 2016
TL;DR: A low complexity decoding algorithm based on list sphere decoding is proposed that can reduce the computational complexity substantially while achieving near maximum likelihood (ML) performance.
Abstract: Non-orthogonal multiple access is one of the key techniques developed for future 5G communication systems, among which the recently proposed sparse code multiple access (SCMA) has attracted a lot of researchers' interest. By exploiting the shaping gain of the multi-dimensional complex codewords, SCMA is shown to have a better performance compared with other non-orthogonal schemes such as low density signature (LDS). However, although the sparsity of the codewords makes the near-optimal message passing algorithm (MPA) possible, the decoding complexity is still very high. In this paper, we propose a low complexity decoding algorithm based on list sphere decoding. Complexity analysis and simulation results show that the proposed algorithm can reduce the computational complexity substantially while achieving near maximum likelihood (ML) performance.

36 citations


Journal ArticleDOI
TL;DR: A combination framework for the automated polynomial complexity analysis of term rewrite systems is presented, which covers both derivational and runtime complexity analysis and is employed as the theoretical foundation of the automated complexity tool TcT.
Abstract: In this paper we present a combination framework for the automated polynomial complexity analysis of term rewrite systems. The framework covers both derivational and runtime complexity analysis, and is employed as the theoretical foundation of the automated complexity tool TcT. We present generalisations of powerful complexity techniques, notably a generalisation of complexity pairs and (weak) dependency pairs. Finally, we also present a novel technique, called dependency graph decomposition, that in the dependency pair setting greatly increases modularity.

21 citations


Proceedings Article
09 Jul 2016
TL;DR: This paper provides theoretical justification for exact values (or in some cases bounds) of some of the most central information complexity parameters, namely the VC dimension, the (recursive) teaching dimension, the self-directed learning complexity, and the optimal mistake bound, for classes of acyclic CP-nets.
Abstract: Learning of user preferences has become a core issue in AI research. For example, recent studies investigate learning of Conditional Preference Networks (CP-nets) from partial information. To assess the optimality of learning algorithms as well as to better understand the combinatorial structure of CP-net classes, it is helpful to calculate certain learning-theoretic information complexity parameters. This paper provides theoretical justification for exact values (or in some cases bounds) of some of the most central information complexity parameters, namely the VC dimension, the (recursive) teaching dimension, the self-directed learning complexity, and the optimal mistake bound, for classes of acyclic CP-nets. We further provide an algorithm that learns tree-structured CP-nets from membership queries. Using our results on complexity parameters, we can assess the optimality of our algorithm as well as that of another query learning algorithm for acyclic CP-nets presented in the literature.

19 citations


Journal ArticleDOI
TL;DR: In this paper, the relationship between linear complexity and expansion complexity was studied and it was shown that for purely periodic sequences both figures of merit provide essentially the same quality test for a sufficiently long part of the sequence.
Abstract: The linear complexity is a measure for the unpredictability of a sequence over a finite field and thus for its suitability in cryptography. In 2012, Diem introduced a new figure of merit for cryptographic sequences called expansion complexity. We study the relationship between linear complexity and expansion complexity. In particular, we show that for purely periodic sequences both figures of merit provide essentially the same quality test for a sufficiently long part of the sequence. However, if we study shorter parts of the period or nonperiodic sequences, then we can show, roughly speaking, that the expansion complexity provides a stronger test. We demonstrate this by analyzing a sequence of binomial coefficients modulo $p$. Finally, we establish a probabilistic result on the behavior of the expansion complexity of random sequences over a finite field.
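As a reminder of what the linear-complexity figure of merit measures, the sketch below computes the linear complexity of a binary sequence with the Berlekamp-Massey algorithm. It is restricted to GF(2) for brevity (the paper works over general finite fields) and does not compute the expansion complexity.

def linear_complexity_gf2(seq):
    """Linear complexity of a binary sequence via Berlekamp-Massey over GF(2).
    Returns the length of the shortest LFSR generating the sequence."""
    n = len(seq)
    c = [0] * n; b = [0] * n       # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1                   # current complexity, position of last length change
    for i in range(n):
        # discrepancy: next output of the current LFSR vs. the actual bit
        d = seq[i]
        for j in range(1, L + 1):
            d ^= c[j] & seq[i - j]
        if d == 1:
            t = c[:]               # save c before modifying it
            shift = i - m
            for j in range(0, n - shift):   # c <- c + x^shift * b
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# The all-ones sequence has linear complexity 1; a "random-looking"
# sequence typically has complexity close to half its length.
print(linear_complexity_gf2([1, 1, 1, 1, 1, 1]))        # -> 1
print(linear_complexity_gf2([0, 1, 1, 0, 1, 0, 0, 1]))  # -> 4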

18 citations


Proceedings Article
09 Jul 2016
TL;DR: This work analyzes how complex a heuristic function must be to directly guide a state-space search algorithm towards the goal and examines functions that evaluate states with a weighted sum of state features.
Abstract: We analyze how complex a heuristic function must be to directly guide a state-space search algorithm towards the goal. As a case study, we examine functions that evaluate states with a weighted sum of state features. We measure the complexity of a domain by the complexity of the required features. We analyze conditions under which the search algorithm runs in polynomial time and show complexity results for several classical planning domains.
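A toy illustration of the setting: greedy best-first search guided by a heuristic of the form h(s) = sum_i w_i * f_i(s). The grid domain, features, and weights below are invented for illustration and are not the planning domains or features analysed in the paper.

import heapq

def greedy_best_first(start, goal, successors, features, weights):
    """Greedy best-first search guided by h(s) = sum_i weights[i] * features[i](s).
    Returns a path from start to goal, or None. Illustrative sketch only."""
    h = lambda s: sum(w * f(s) for w, f in zip(weights, features))
    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier:
        _, s, path = heapq.heappop(frontier)
        if s == goal:
            return path
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                heapq.heappush(frontier, (h(t), t, path + [t]))
    return None

# Hypothetical toy domain: move on a 10x10 grid towards the origin.
goal = (0, 0)
succ = lambda s: [(s[0] + dx, s[1] + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= s[0] + dx <= 9 and 0 <= s[1] + dy <= 9]
# Two state features (|x| and |y|) combined with unit weights.
features = [lambda s: abs(s[0]), lambda s: abs(s[1])]
print(greedy_best_first((5, 7), goal, succ, features, [1.0, 1.0]))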

17 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the time complexity of adding two n-bit numbers together within the tile self-assembly model, and show that this problem has a worst-case lower bound of $\Omega(\sqrt{n})$ in 2D assembly and $\Omega(\sqrt[3]{n})$ in 3D assembly, with matching worst-case upper bounds.
Abstract: In this paper we consider the time complexity of adding two n-bit numbers together within the tile self-assembly model. The (abstract) tile assembly model is a mathematical model of self-assembly in which system components are square tiles with different glue types assigned to tile edges. Assembly is driven by the attachment of singleton tiles to a growing seed assembly when the net force of glue attraction for a tile exceeds some fixed threshold. Within this framework, we examine the time complexity of computing the sum of two n-bit numbers, where the input numbers are encoded in an initial seed assembly, and the output sum is encoded in the final, terminal assembly of the system. We show that this problem, along with multiplication, has a worst-case lower bound of $\Omega(\sqrt{n})$ in 2D assembly, and $\Omega(\sqrt[3]{n})$ in 3D assembly. We further design algorithms for both 2D and 3D that meet this bound with worst-case run times of $O(\sqrt{n})$ and $O(\sqrt[3]{n})$ respectively, which beats the previous best known upper bound of O(n). Finally, we consider the average-case complexity of addition over uniformly distributed n-bit strings and show how we can achieve $O(\log n)$ average-case time with a simultaneous $O(\sqrt{n})$ worst-case run time in 2D. As additional evidence for the speed of our algorithms, we implement our algorithms, along with the simpler O(n) time algorithm, in a probabilistic run-time simulator and compare the timing results.
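The O(log n) average-case figure rests on a standard fact about adding two uniformly random n-bit numbers: the longest carry-propagation run is O(log n) with high probability. The Monte-Carlo sketch below illustrates only that fact; it does not model the tile-assembly construction itself.

import random

def longest_carry_chain(a, b, n):
    """Length of the longest run of consecutive carries when adding two n-bit numbers."""
    carry, run, longest = 0, 0, 0
    for i in range(n):
        x, y = (a >> i) & 1, (b >> i) & 1
        carry = 1 if (x + y + carry) >= 2 else 0
        run = run + 1 if carry else 0
        longest = max(longest, run)
    return longest

def average_longest_chain(n, trials=2000):
    total = 0
    for _ in range(trials):
        a, b = random.getrandbits(n), random.getrandbits(n)
        total += longest_carry_chain(a, b, n)
    return total / trials

# Grows logarithmically in n, which is what makes O(log n)
# average-case addition schemes possible.
for n in (64, 256, 1024, 4096):
    print(n, round(average_longest_chain(n), 2))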

17 citations


Journal ArticleDOI
TL;DR: The relative discrepancy method is presented, a new rectangle-based method for proving communication complexity lower bounds for boolean functions, powerful enough to separate information complexity and communication complexity.
Abstract: We show an exponential gap between communication complexity and information complexity by giving an explicit example of a partial boolean function with information complexity ≤ O(k), and distributional communication complexity ≥ 2^k. This shows that a communication protocol cannot always be compressed to its internal information. By a result of Braverman [2015], our gap is the largest possible. By a result of Braverman and Rao [2014], our example shows a gap between communication complexity and amortized communication complexity, implying that a tight direct sum result for distributional communication complexity cannot hold, answering a long-standing open problem. Another (conceptual) contribution of our work is the relative discrepancy method, a new rectangle-based method for proving communication complexity lower bounds for boolean functions, powerful enough to separate information complexity and communication complexity.

Journal ArticleDOI
TL;DR: The Local VC-Entropy-based bound improves on Vapnik's original results because it is able to discard those functions that will not be selected during the learning phase, and it allows one to reduce the computational requirements that arise when dealing with the Local Rademacher Complexity in binary classification problems.

Proceedings ArticleDOI
TL;DR: Using the communication analogue of the cheat sheet framework of Aaronson, Ben-David, and Kothari, the authors give the first super-quadratic separation between quantum and randomized communication complexity for a total function, exhibiting a power 2.5 gap.
Abstract: While exponential separations are known between quantum and randomized communication complexity for partial functions (Raz, STOC 1999), the best known separation between these measures for a total function is quadratic, witnessed by the disjointness function. We give the first super-quadratic separation between quantum and randomized communication complexity for a total function, giving an example exhibiting a power 2.5 gap. We further present a 1.5 power separation between exact quantum and randomized communication complexity, improving on the previous ~1.15 separation by Ambainis (STOC 2013). Finally, we present a nearly optimal quadratic separation between randomized communication complexity and the logarithm of the partition number, improving upon the previous best power 1.5 separation due to Goos, Jayram, Pitassi, and Watson. Our results are the communication analogues of separations in query complexity proved using the recent cheat sheet framework of Aaronson, Ben-David, and Kothari (STOC 2016). Our main technical results are randomized communication and information complexity lower bounds for a family of functions, called lookup functions, that generalize and port the cheat sheet framework to communication complexity.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: A low-complexity RLS algorithm, based on the dichotomous coordinate descent algorithm (DCD), is proposed, showing that in some situations the computational complexity is reduced to O(M).
Abstract: Adaptive filters for Volterra system identification must deal with two difficulties: large filter length M (resulting in high computational complexity and low convergence rate) and high correlation in the input sequence. The second problem is minimized by using the recursive least-squares (RLS) algorithm; however, its large computational complexity (O(M^2)) might be prohibitive in some applications. We propose here a low-complexity RLS algorithm, based on the dichotomous coordinate descent (DCD) algorithm, showing that in some situations the computational complexity is reduced to O(M). The new algorithm is compared to the standard RLS, normalized least-mean-squares (NLMS) and affine projections (AP) algorithms.
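For orientation, a plain exponentially weighted RLS update is sketched below; it is the O(M^2)-per-sample baseline referred to in the abstract, not the DCD-based variant proposed in the paper. Parameter values are illustrative.

import numpy as np

def rls_identify(x, d, M, lam=0.999, delta=1e2):
    """Standard exponentially weighted RLS filter of length M.
    Per-sample cost is O(M^2) because of the P-matrix update."""
    w = np.zeros(M)                       # filter coefficients
    P = delta * np.eye(M)                 # inverse correlation matrix estimate
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]      # regressor (most recent sample first)
        k = P @ u / (lam + u @ P @ u)     # gain vector
        e = d[n] - w @ u                  # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

# Identify a short FIR system from noisy observations (toy example).
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(rls_identify(x, d, M=3).round(3))   # should be close to h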

Journal ArticleDOI
TL;DR: In this article, the expected time complexity of the auction algorithm for bipartite matching on random graphs with edge probability $p = c\log N / N$ and $c > 1$ is shown to be $O(N\log^2 N / \log(Np))$ w.h.p.
Abstract: In this paper we analyze the expected time complexity of the auction algorithm for the matching problem on random bipartite graphs. We first prove that if for every non-maximum matching on graph G there exists an augmenting path with a length of at most 2l + 1, then the auction algorithm converges after at most $N \cdot l$ iterations. Then, we prove that the expected time complexity of the auction algorithm for bipartite matching on random graphs with edge probability $p = c\log N / N$ and $c > 1$ is $O(N\log^2 N / \log(Np))$ w.h.p. This time complexity is equal to that of other augmenting-path algorithms such as the HK algorithm. Furthermore, we show that the algorithm can be implemented on parallel machines with $O(\log N)$ processors and shared memory with an expected time complexity of $O(N\log N)$. © 2014 Wiley Periodicals, Inc. Random Struct. Alg., 48, 384-395, 2016
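A compact sketch of the basic auction mechanism for bipartite matching (unit edge values, fixed epsilon bidding) is given below; it conveys the iteration being analysed but none of the paper's probabilistic analysis or parallel implementation. For integer values, taking eps < 1/n guarantees an optimal assignment; a fixed eps is used here for brevity, and all names are illustrative.

def auction_matching(adj, n_objects, eps=0.1):
    """Basic auction algorithm for bipartite matching.
    adj[i] lists the objects person i may be matched to (unit-value edges).
    Illustrative sketch: unit values, fixed eps, no scaling phases."""
    prices = [0.0] * n_objects
    owner = [None] * n_objects            # object -> person
    assigned = [None] * len(adj)          # person -> object
    unassigned = list(range(len(adj)))
    while unassigned:
        i = unassigned.pop()
        if not adj[i]:
            continue                      # isolated person, stays unmatched
        # best and second-best net value (1 - price) among i's neighbours
        vals = sorted(((1.0 - prices[j], j) for j in adj[i]), reverse=True)
        best_val, j = vals[0]
        second_val = vals[1][0] if len(vals) > 1 else best_val - eps
        if best_val < 0:
            continue                      # everything too expensive: give up
        # bid: raise the price of j so that i is indifferent up to eps
        prices[j] += best_val - second_val + eps
        if owner[j] is not None:          # evict the previous owner
            assigned[owner[j]] = None
            unassigned.append(owner[j])
        owner[j], assigned[i] = i, j
    return assigned

# Toy instance: 4 persons, 4 objects.
adj = [[0, 1], [0], [1, 2], [2, 3]]
print(auction_matching(adj, n_objects=4))   # -> [1, 0, 2, 3]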

Journal ArticleDOI
TL;DR: In this article, the problem of deciding the winner in counter reachability games is investigated and it is shown that in most cases it has the same complexity under all three semantics, and that under one semantics, the complexity in dimension one depends on whether the objective value is zero or any other integer.
Abstract: Counter reachability games are played by two players on a graph with labelled edges. Each move consists in picking an edge from the current location and adding its label to a counter vector. The objective is to reach a given counter value in a given location. We distinguish three semantics for counter reachability games, according to what happens when a counter value would become negative: the edge is either disabled, or enabled but the counter value becomes zero, or enabled. We consider the problem of deciding the winner in counter reachability games and show that, in most cases, it has the same complexity under all semantics. Surprisingly, under one semantics, the complexity in dimension one depends on whether the objective value is zero or any other integer.

Journal ArticleDOI
TL;DR: This paper introduces an approximation algorithm for solving a general class of multi-parametric mixed-integer linear programming (mp-MILP) problems and shows that a significant reduction in computational complexity can be achieved by introducing an adjustable level of suboptimality.

Proceedings ArticleDOI
05 Apr 2016
TL;DR: The reason the min-sum algorithm is more prone to errors than the sum-product algorithm is analyzed, and two improved algorithms are put forward which improve the performance of the min-sum algorithm with comparable algorithmic complexity.
Abstract: Low-Density Parity Check (LDPC) codes offer high-performance error correction near the Shannon limit, employing large code lengths and a number of iterations in the decoding process. The conventional decoding algorithm for LDPC codes is the Log Likelihood Ratio based Belief Propagation (LLR BP), also known as the ‘sum-product algorithm’, which gives the best decoding performance but requires the most computational complexity and leads to implementations with increased hardware complexity. A simpler variant of this algorithm, known as the ‘min-sum algorithm’, reduces both computational and hardware complexity, but with reduced accuracy. This paper analyzes why the min-sum algorithm is more prone to errors than the sum-product algorithm, and puts forward two improved algorithms which improve the performance of the min-sum algorithm with comparable algorithmic complexity.
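The difference analysed here sits in the check-node update: the sum-product rule combines incoming LLRs through a tanh product, while min-sum replaces the magnitude computation by a minimum. A minimal sketch of the two standard update rules for a single check node follows (the improved algorithms proposed in the paper are not shown).

import math

def check_node_sum_product(llrs_in):
    """Sum-product (LLR BP) check-node update: outgoing LLR towards each edge."""
    out = []
    for i in range(len(llrs_in)):
        prod = 1.0
        for j, l in enumerate(llrs_in):
            if j != i:
                prod *= math.tanh(l / 2.0)
        prod = max(min(prod, 1 - 1e-12), -1 + 1e-12)   # numerical safety
        out.append(2.0 * math.atanh(prod))
    return out

def check_node_min_sum(llrs_in):
    """Min-sum approximation: sign product times minimum magnitude."""
    out = []
    for i in range(len(llrs_in)):
        others = [l for j, l in enumerate(llrs_in) if j != i]
        sign = 1.0
        for l in others:
            sign *= 1.0 if l >= 0 else -1.0
        out.append(sign * min(abs(l) for l in others))
    return out

llrs = [1.2, -0.4, 2.5, 0.8]
print([round(v, 3) for v in check_node_sum_product(llrs)])
print([round(v, 3) for v in check_node_min_sum(llrs)])
# Min-sum magnitudes are always at least as large as the sum-product ones,
# i.e. the approximation is over-confident, which is why correction terms
# (normalisation / offset) are commonly applied.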

Journal ArticleDOI
TL;DR: It is proved that the Partner Units Problem (PUP) is NP-complete for various important subclasses of the PUP.

Journal ArticleDOI
TL;DR: It is shown that in the model of zero-error communication complexity, direct sum fails for average communication complexity as well as for external information complexity.
Abstract: We show that in the model of zero-error communication complexity, direct sum fails for average communication complexity as well as for external information complexity. Our example also refutes a version of a conjecture by Braverman et al. that in the zero-error case amortized communication complexity equals external information complexity. In our examples the underlying distributions do not have full support. One interpretation of a distribution of non-full support is as a promise given to the players (the players have a guarantee on their inputs). This brings up the issue of promise versus non-promise problems in this context.

Proceedings ArticleDOI
22 May 2016
TL;DR: Some modifications to the l0-IPAPA algorithm are proposed in order to decrease its computational complexity while preserving its good convergence properties, and the inclusion of a data-selection mechanism provides promising results.
Abstract: There are two main families of algorithms that tackle the problem of sparse system identification: the proportionate family and the one that employs sparsity-promoting penalty functions. Recently, a new approach was proposed with the l0-IPAPA algorithm, which combines proportionate updates with sparsity-promoting penalties. This paper proposes some modifications to the l0-IPAPA algorithm in order to decrease its computational complexity while preserving its good convergence properties. Among these modifications, the inclusion of a data-selection mechanism provides promising results. Some enlightening simulation results are provided in order to verify and compare the performance of the proposed algorithms.

Journal ArticleDOI
TL;DR: This paper proposes an enhanced IKSD algorithm that combines column-norm ordering (channel ordering) with the Manhattan metric in order to enhance the performance and reduce the computational complexity.
Abstract: The main challenge in MIMO systems is how to design MIMO detection algorithms with the lowest computational complexity and high performance, capable of accurately detecting the transmitted signals. Previous research has established Maximum Likelihood Detection (MLD) as the optimum approach, but this algorithm has exponential complexity, especially as the number of transmit antennas and the constellation size increase, making it impractical for implementation. However, there are alternative algorithms, such as K-best sphere detection (KSD) and improved K-best sphere detection (IKSD), which can achieve close to Maximum Likelihood (ML) performance with less computational complexity. In this paper, we propose an enhanced IKSD algorithm that combines column-norm ordering (channel ordering) with the Manhattan metric in order to enhance the performance and reduce the computational complexity. The simulation results show that the channel ordering approach enhances the performance and reduces the complexity, and that the Manhattan metric alone can reduce the complexity. Therefore, combining channel ordering with the Manhattan metric enhances the performance and reduces the complexity much more than channel ordering alone. Our proposed algorithm can thus be considered a feasible complexity reduction scheme suitable for practical implementation.
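A small sketch of the two ingredients that are combined in the proposal: sorting the channel columns by norm before detection, and using the Manhattan (l1) metric instead of the squared Euclidean one inside a K-best search. The detector below is a generic, real-valued textbook K-best, not the authors' exact IKSD; K, the alphabet, and the channel are illustrative.

import numpy as np

def column_norm_order(H):
    """Channel (column-norm) ordering: ascending norms, so the strongest
    column ends up last and is detected first in the backward recursion."""
    order = np.argsort(np.linalg.norm(H, axis=0))
    return H[:, order], order

def kbest_detect(H, y, alphabet, K=4, metric="manhattan"):
    """Generic breadth-first K-best detection for a real-valued model y = H s + n."""
    Hs, order = column_norm_order(H)
    Q, R = np.linalg.qr(Hs)
    z = Q.T @ y
    n = Hs.shape[1]
    dist = lambda e: abs(e) if metric == "manhattan" else e * e
    survivors = [([], 0.0)]                            # (partial symbols, partial metric)
    for level in range(n - 1, -1, -1):                 # detect the last column first
        candidates = []
        for syms, d in survivors:
            fixed = dict(zip(range(level + 1, n), reversed(syms)))
            r = z[level] - sum(R[level, j] * fixed[j] for j in range(level + 1, n))
            for s in alphabet:
                candidates.append((syms + [s], d + dist(r - R[level, level] * s)))
        survivors = sorted(candidates, key=lambda c: c[1])[:K]   # keep the best K
    best_syms = list(reversed(survivors[0][0]))        # symbols in sorted-column order
    s_hat = np.empty(n)
    s_hat[order] = best_syms                           # undo the column permutation
    return s_hat

H = np.array([[1.0, 0.4, 0.2], [0.3, 0.9, 0.5], [0.1, 0.2, 1.1]])
s_true = np.array([1.0, -1.0, 1.0])
y = H @ s_true + 0.05 * np.random.default_rng(1).standard_normal(3)
print(kbest_detect(H, y, alphabet=(-1.0, 1.0), K=4))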

Proceedings ArticleDOI
30 Mar 2016
TL;DR: Simulation results show that the ill-RCKB provides significant complexity reduction without compromising the performance; this is achieved by discarding irrelevant nodes that have distance metrics greater than a pruned radius value, which depends on the channel condition number.
Abstract: The traditional K-best sphere decoder retains the best K nodes at each level of the search tree; these K nodes include irrelevant nodes which increase the complexity without improving the performance. A variant of the K-best sphere decoding algorithm for ill-conditioned MIMO channels is proposed, namely the ill-conditioned reduced complexity K-best algorithm (ill-RCKB). The ill-RCKB provides lower complexity than the traditional K-best algorithm without sacrificing its performance; this is achieved by discarding irrelevant nodes that have distance metrics greater than a pruned radius value, which depends on the channel condition number. A hybrid-RCKB decoder is also proposed in order to balance performance and complexity in both well- and ill-conditioned channels. Complexity analysis for the proposed algorithms is provided as well. Simulation results show that the ill-RCKB provides significant complexity reduction without compromising the performance.

Proceedings ArticleDOI
01 Jan 2016
TL;DR: In this paper, the authors study the computational complexity of constraint satisfaction problems that are based on integer expressions and algebraic circuits and show that the complexity varies over a wide range of complexity classes such as L, P, NP, PSPACE, NEXP, and even Sigma_1, the class of c.e. languages.
Abstract: We study the computational complexity of constraint satisfaction problems that are based on integer expressions and algebraic circuits. On input of a finite set of variables and a finite set of constraints the question is whether the variables can be mapped onto finite subsets of N (resp., finite intervals over N) such that all constraints are satisfied. According to the operations allowed in the constraints, the complexity varies over a wide range of complexity classes such as L, P, NP, PSPACE, NEXP, and even Sigma_1, the class of c.e. languages.

Proceedings ArticleDOI
01 Oct 2016
TL;DR: In this article, the authors extend the hierarchy results of Rossman, Servedio and Tan [1] to address circuits of almost logarithmic depth and obtain a stronger result by a significantly shorter proof.
Abstract: We extend the recent hierarchy results of Rossman, Servedio and Tan [1] to address circuits of almost logarithmic depth. Our proof uses the same basic approach as [1], but a number of small differences enable us to obtain a stronger result by a significantly shorter proof.

Proceedings ArticleDOI
20 Mar 2016
TL;DR: The proposed algorithm generalizes the Orthonormal Projection Approximation Subspace Tracking approach for tracking a class of third-order tensors which have one dimension growing with time and has linear complexity, good convergence rate and good estimation accuracy.
Abstract: We present a fast adaptive PARAFAC decomposition algorithm with low computational complexity. The proposed algorithm generalizes the Orthonormal Projection Approximation Subspace Tracking (OPAST) approach for tracking a class of third-order tensors which have one dimension growing with time. It has linear complexity, good convergence rate and good estimation accuracy. To deal with large-scale problems, a parallel implementation can be applied to reduce both computational complexity and storage. We illustrate the effectiveness of our algorithm in comparison with the state-of-the-art algorithms through simulation experiments.

Posted Content
18 Jul 2016
TL;DR: This work provides evidence that the running times of known pseudo-polynomial time algorithms solving IP, when the number of constraints is a constant and the branch-width of the corresponding column-matroid is a constant, are probably optimal.
Abstract: We use the Exponential Time and Strong Exponential Time hypotheses (ETH & SETH) to provide conditional lower bounds on the solvability of the integer programming (IP) problem. We provide evidence that the running times of known pseudo-polynomial time algorithms solving IP, when the number of constraints is a constant [Papadimitriou, J. ACM 1981] and when the branch-width of the corresponding column-matroid is a constant [Cunningham and Geelen, IPCO 2007], are probably optimal.
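To make the object of these lower bounds concrete, here is a toy pseudo-polynomial dynamic program for deciding feasibility of Ax = b over bounded nonnegative integers, in the spirit of (but much cruder than) the algorithms cited above; the explicit bound x_max and the breadth-first set-of-states formulation are simplifications for illustration.

def ip_feasible(A, b, x_max):
    """Decide whether A x = b has an integer solution with 0 <= x_i <= x_max.
    Dynamic program over partial right-hand sides: after fixing x_0..x_k the
    state is the vector A[:, :k+1] @ x[:k+1]; the number of distinct states
    is pseudo-polynomial when the number of rows m is a constant."""
    m, n = len(A), len(A[0])
    states = {tuple([0] * m)}                 # reachable partial sums
    for k in range(n):                        # fix variable x_k
        new_states = set()
        for s in states:
            for v in range(x_max + 1):
                t = tuple(s[i] + v * A[i][k] for i in range(m))
                # prune states that can never be corrected back towards b
                if all(abs(t[i] - b[i]) <= x_max * sum(abs(A[i][j]) for j in range(k + 1, n))
                       for i in range(m)):
                    new_states.add(t)
        states = new_states
    return tuple(b) in states

A = [[3, 5, 7],
     [1, 2, 1]]
b = [24, 6]
print(ip_feasible(A, b, x_max=5))   # -> True (e.g. x = (0, 2, 2))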

Proceedings ArticleDOI
07 Jun 2016
TL;DR: A methodology based on system connections to calculate system complexity is proposed; the case studies are modeled using the theory of discrete event systems and simulated in different contexts in order to measure their complexities.
Abstract: This paper proposes a methodology based on system connections to calculate system complexity. Two case studies are considered: the dining Chinese philosophers problem and a distribution center. Both are modeled using the theory of discrete event systems and simulated in different contexts in order to measure their complexities. The obtained results show (i) that the static complexity is a limiting factor for the dynamic complexity, and (ii) the lowest cost, in terms of complexity, for each unit of measure of the system performance. The associated complexity and performance measures aggregate knowledge about the system.

Proceedings ArticleDOI
01 Oct 2016
TL;DR: This paper proposes a method of multifractal division of the computational complexity classes, formalized by introducing special equivalence relations on these classes and exposing the self-similarity properties of the complexity class structure.
Abstract: This paper proposes a method of multifractal division of the computational complexity classes, which is formalized by introducing special equivalence relations on these classes. By exposing the self-similarity properties of the complexity class structure, this method allows an accurate classification of problems and demonstrates the capability of adapting to new advances in computational complexity theory.

Journal ArticleDOI
TL;DR: It is proved that it is an NP-complete problem to decide whether a given simple game is stable or not.
Abstract: This paper considers the computational complexity of the design of voting rules, which is formulated using simple games. We prove that it is an NP-complete problem to decide whether a given simple game is stable or not.