Showing papers on "Average-case complexity published in 2020"


Book
01 Jan 2020
TL;DR: This volume surveys measures of data and classifier complexity in classification problems and their interplay with sample size, margin-based learning, Popper's philosophy of inductive learning, and evolutionary learning.
Abstract: Theory and Methodology.- Measures of Geometrical Complexity in Classification Problems.- Object Representation, Sample Size, and Data Set Complexity.- Measures of Data and Classifier Complexity and the Training Sample Size.- Linear Separability in Descent Procedures for Linear Classifiers.- Data Complexity, Margin-Based Learning, and Popper's Philosophy of Inductive Learning.- Data Complexity and Evolutionary Learning.- Classifier Domains of Competence in Data Complexity Space.- Data Complexity Issues in Grammatical Inference.- Applications.- Simple Statistics for Complex Feature Spaces.- Polynomial Time Complexity Graph Distance Computation for Web Content Mining.- Data Complexity in Clustering Analysis of Gene Microarray Expression Profiles.- Complexity of Magnetic Resonance Spectrum Classification.- Data Complexity in Tropical Cyclone Positioning and Classification.- Human-Computer Interaction for Complex Pattern Recognition Problems.- Complex Image Recognition and Web Security.

164 citations


Posted Content
TL;DR: It is proved that the $K_{a,b}$ counting problem admits an $n^{a+o(1)}$-time algorithm if $a\geq 8$, while any $n^{a-\epsilon}$-time algorithm fails to solve it even on random bipartite graphs for any constant $\epsilon>0$ under the Strong Exponential Time Hypothesis.
Abstract: In this paper, we seek a natural problem and a natural distribution of instances such that any $O(n^{c-\epsilon})$-time algorithm fails to solve most instances drawn from the distribution, while the problem admits an $n^{c+o(1)}$-time algorithm that correctly solves all instances. Specifically, we consider the $K_{a,b}$ counting problem in a random bipartite graph, where $K_{a,b}$ is a complete bipartite graph for constants $a$ and $b$. We prove that the $K_{a,b}$ counting problem admits an $n^{a+o(1)}$-time algorithm if $a\geq 8$, while any $n^{a-\epsilon}$-time algorithm fails to solve it even on random bipartite graphs for any constant $\epsilon>0$ under the Strong Exponential Time Hypothesis. Then, we amplify the hardness of this problem using the direct product theorem and Yao's XOR lemma by presenting a general framework of hardness amplification in the setting of fine-grained complexity.
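
A minimal brute-force baseline (illustrative names and a toy random-graph generator; this is not the paper's $n^{a+o(1)}$ algorithm) that counts copies of $K_{a,b}$ in a bipartite graph by enumerating $a$-subsets of one side and counting $b$-subsets of their common neighborhood:

```python
from itertools import combinations
from math import comb
import random

def count_kab(left, right, adj, a, b):
    """Count K_{a,b} copies whose a-side lies in `left` and b-side in `right`.
    adj[u] is the set of right-vertices adjacent to left-vertex u.
    Brute force over a-subsets: roughly n^a * n work, a baseline only."""
    total = 0
    for S in combinations(left, a):
        common = set.intersection(*(adj[u] for u in S))
        total += comb(len(common), b)
    return total

# Small random bipartite graph G(n, n, 1/2), the kind of distribution the paper studies.
n, p = 12, 0.5
left, right = range(n), range(n)
adj = {u: {v for v in right if random.random() < p} for u in left}
print(count_kab(left, right, adj, a=2, b=2))   # K_{2,2} copies, i.e. 4-cycles
```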

15 citations


01 Jan 2020
TL;DR: The equivalence provides fundamentally new proof techniques for analyzing average-case complexity through the lens of meta-complexity of time-bounded Kolmogorov complexity and resolves, as immediate corollaries, questions of equivalence among different notions of average-case complexity of PH.
Abstract: We exactly characterize the average-case complexity of the polynomial-time hierarchy (PH) by the worst-case (meta-)complexity of $\text{GapMINKT}^{\text{PH}}$, i.e., an approximation version of the problem of determining if a given string can be compressed to a short PH-oracle efficient program. Specifically, we establish the following equivalence: \begin{align*} &\text{DistPH}\subseteq \text{AvgP}\ \ (\text{i.e., PH is easy on average})\\ \Longleftrightarrow\ &\text{GapMINKT}^{\text{PH}}\in \mathrm{P}. \end{align*} In fact, our equivalence is significantly broad: A number of statements on several fundamental notions of complexity theory, such as errorless and one-sided-error average-case complexity, sublinear-time-bounded and polynomial-time-bounded Kolmogorov complexity, and PH-computable hitting set generators, are all shown to be equivalent. Our equivalence provides fundamentally new proof techniques for analyzing average-case complexity through the lens of meta-complexity of time-bounded Kolmogorov complexity and resolves, as immediate corollaries, questions of equivalence among different notions of average-case complexity of PH: low success versus high success probabilities (i.e., a hardness amplification theorem for DistPH against uniform algorithms) and errorless versus one-sided-error average-case complexity of PH. Our results are based on a sequence of new technical results that further develop the proof techniques of the author's previous work on the non-black-box worst-case to average-case reduction and unexpected hardness results for Kolmogorov complexity (FOCS'18, CCC'20, ITCS'20, STOC'20). Among other things, we prove the following. 1) $\text{GapMINKT}^{\text{NP}}\in \mathrm{P}$ implies $\mathrm{P}=\text{BPP}$. At the core of the proof is a new black-box hitting set generator construction whose reconstruction algorithm uses few random bits, which also improves the approximation quality of the non-black-box worst-case to average-case reduction without using a pseudorandom generator. 2) $\text{GapMINKT}^{\text{PH}}\in \mathrm{P}$ implies $\text{DistPH}\subseteq \text{AvgBPP}=\text{AvgP}$. 3) If $\text{MINKT}^{\text{PH}}$ is easy on a $1/\text{poly}(n)$-fraction of inputs, then $\text{GapMINKT}^{\text{PH}}\in \mathrm{P}$. This improves the error tolerance of the previous non-black-box worst-case to average-case reduction. The full version of this paper is available on ECCC.
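
For intuition only, the sketch below shows the shape of a MINKT-style question ("is there a program of length at most s that prints x within t steps?") on a made-up three-instruction machine; it is a toy stand-in, not the paper's GapMINKT^PH, which is defined with respect to a universal machine with a PH oracle. The brute force is exponential in s, which is why the polynomial-time version of the question is the interesting one.

```python
from itertools import product

def run_toy_machine(prog, t):
    """Toy stand-in for a universal machine: '0'/'1' append a bit,
    'D' doubles the output written so far; each written bit costs one step."""
    out, steps = "", 0
    for op in prog:
        if op in "01":
            out += op
            steps += 1
        elif op == "D":
            out += out
            steps += len(out) // 2
        if steps > t:
            return None                      # exceeded the time bound
    return out

def minkt_toy(x, s, t):
    """Is there a program of length <= s that prints x within t steps?
    Exponential in s: it tries every program over the toy alphabet."""
    for length in range(s + 1):
        for prog in product("01D", repeat=length):
            if run_toy_machine(prog, t) == x:
                return True
    return False

print(minkt_toy("01" * 8, 5, 16))   # True: the 5-symbol program "01DDD" prints it
print(minkt_toy("01" * 8, 3, 16))   # False: 3 symbols can emit at most 4 characters
```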

15 citations


Journal ArticleDOI
TL;DR: This work presents a complete description of the sparse interpolation method used by Monagan and Tuncer, shows that it runs in random polynomial time, and studies what happens to the sparsity of multivariate polynomials when the variables are successively evaluated at numbers.
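
The question of how sparsity behaves under evaluation can be made concrete with a small sketch (a generic dict-of-exponent-vectors representation of my own, not Monagan and Tuncer's data structures): substituting a number for a variable lets terms with equal remaining exponents merge, so the evaluated polynomial can have fewer nonzero terms than the original.

```python
from collections import defaultdict

def evaluate_last_variable(poly, alpha):
    """Substitute the number alpha for the last variable.
    poly maps exponent tuples (e1, ..., ek) to nonzero coefficients."""
    result = defaultdict(int)
    for exps, coef in poly.items():
        *rest, last = exps
        result[tuple(rest)] += coef * alpha ** last
    return {e: c for e, c in result.items() if c != 0}   # drop cancelled terms

# f = x^2*y + 3*x^2*y^2 - x*y^3 in the variables (x, y): 3 terms.
f = {(2, 1): 1, (2, 2): 3, (1, 3): -1}
g = evaluate_last_variable(f, 2)   # evaluate y = 2
print(g)                           # {(2,): 14, (1,): -8} -- the two x^2 terms merged
```

Applying the function repeatedly evaluates the variables successively, which is the situation the paper analyzes.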

8 citations


Journal ArticleDOI
TL;DR: Despite the excellent performance of the Merge sort algorithm, its need for auxiliary memory during sorting makes it less preferable than the Quick sort algorithm for applications where good cache locality is of paramount importance.
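
A minimal sketch of the trade-off described above, using standard textbook versions rather than code from the paper under discussion: merge sort builds an auxiliary list during every merge, while quicksort partitions the array in place, which is what favors it when cache locality matters.

```python
def merge_sort(a):
    """O(n log n) in the worst case, but each merge allocates an auxiliary list."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0                  # the auxiliary memory
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def quick_sort(a, lo=0, hi=None):
    """In place: O(n log n) on average, O(n^2) in the worst case, no auxiliary array."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot, i = a[hi], lo                     # Lomuto partition around the last element
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    quick_sort(a, lo, i - 1)
    quick_sort(a, i + 1, hi)
    return a

print(merge_sort([5, 2, 9, 1, 5]))   # [1, 2, 5, 5, 9]
print(quick_sort([5, 2, 9, 1, 5]))   # [1, 2, 5, 5, 9]
```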

4 citations


Proceedings ArticleDOI
01 Nov 2020
TL;DR: In this article, the average-case complexity of the polynomial-time hierarchy (PH) is characterized by the worst-case meta-complexity of GapMINKT^PH, i.e., an approximation version of the problem of determining if a given string can be compressed to a short PH-oracle efficient program.

2 citations


Journal ArticleDOI
TL;DR: It is shown that, unless a complexity-theoretic hypothesis fails, some NP-complete problems cannot have a polynomial-time errorless heuristic algorithm with any vanishing failure rate, while for others a vanishing, even exponentially small, failure rate is proved to be achievable.
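
To make the notion concrete: an errorless heuristic may answer "don't know" but is never wrong, and its failure rate is the probability of that answer on a random instance. The toy example below (my own illustration for SAT via unit propagation, unrelated to the specific problems and hypothesis of the paper) returns True or False only when the answer is certain and None otherwise.

```python
def errorless_sat_heuristic(clauses):
    """Polynomial-time errorless heuristic for SAT (clauses are lists of
    nonzero ints, DIMACS style).  Returns True (satisfiable), False
    (unsatisfiable), or None ("don't know"); it is never wrong."""
    assignment = {}
    clauses = [set(c) for c in clauses]
    changed = True
    while changed:
        changed = False
        simplified = []
        for c in clauses:
            # Drop clauses already satisfied by forced assignments.
            if any(assignment.get(abs(l)) == (l > 0) for l in c if abs(l) in assignment):
                continue
            # Remove falsified literals; an empty clause certifies UNSAT.
            c = {l for l in c if abs(l) not in assignment}
            if not c:
                return False
            simplified.append(c)
        clauses = simplified
        for c in clauses:
            if len(c) == 1:                  # a unit clause forces an assignment
                (lit,) = c
                assignment[abs(lit)] = lit > 0
                changed = True
                break
    if not clauses:
        return True                          # every clause satisfied: certainly SAT
    return None                              # the heuristic fails on this instance

print(errorless_sat_heuristic([[1], [-1, 2], [-2, 3]]))   # True
print(errorless_sat_heuristic([[1], [-1]]))               # False
print(errorless_sat_heuristic([[1, 2], [-1, -2]]))        # None ("don't know")
```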

1 citation


Posted Content
TL;DR: The average teaching complexity of the task of locating a target region among those induced by intersections of n halfspaces in R^d is investigated, showing that the average-case teaching complexity is Θ(d), which is in sharp contrast to the worst-case teaching complexity of Θ(n).
Abstract: We examine the task of locating a target region among those induced by intersections of n halfspaces in R^d. This generic task connects to fundamental machine learning problems, such as training a perceptron and learning a ϕ-separable dichotomy. We investigate the average teaching complexity of the task, i.e., the minimal number of samples (halfspace queries) required by a teacher to help a version-space learner in locating a randomly selected target. As our main result, we show that the average-case teaching complexity is Θ(d), which is in sharp contrast to the worst-case teaching complexity of Θ(n). If, instead, we consider the average-case learning complexity, the bounds have a dependency on n as Θ(n) for i.i.d. queries and Θ(d log(n)) for actively chosen queries by the learner. Our proof techniques are based on novel insights from computational geometry, which allow us to count the number of convex polytopes and faces in a Euclidean space depending on the arrangement of halfspaces. Our insights allow us to establish a tight bound on the average-case complexity for ϕ-separable dichotomies, which generalizes the known O(d) bound on the average number of "extreme patterns" in the classical computational geometry literature (Cover, 1965).
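
The Θ(d) versus Θ(n) contrast leans on classical counting results; as a reference point, here is Cover's (1965) formula, cited at the end of the abstract, for the number of dichotomies of n points in general position in R^d that are realizable by a homogeneous linear separator (the formula is standard; the surrounding script is illustrative only).

```python
from math import comb

def cover_count(n, d):
    """Cover (1965): number of homogeneously linearly separable dichotomies
    of n points in general position in R^d."""
    return 2 * sum(comb(n - 1, k) for k in range(d))

d = 3
for n in (5, 10, 20, 40):
    print(f"n={n}, d={d}: {cover_count(n, d)} of {2**n} dichotomies are realizable")
```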

1 citation


Posted Content
TL;DR: In this paper, the behavior of the Euclidean algorithm is analyzed for pairs (g,f) of univariate nonconstant polynomials over a finite field F_q of q elements when the highest-degree polynomial g is fixed.
Abstract: We analyze the behavior of the Euclidean algorithm applied to pairs (g,f) of univariate nonconstant polynomials over a finite field F_q of q elements when the highest-degree polynomial g is fixed. Considering all the elements f of fixed degree, we establish asymptotically optimal bounds in terms of q for the number of elements f which are relatively prime with g and for the average degree of gcd(g,f). The accuracy of our estimates is confirmed by practical experiments. We also exhibit asymptotically optimal bounds for the average-case complexity of the Euclidean algorithm applied to pairs (g,f) as above.
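
A minimal experiment in the spirit of the paper's setting, with a small prime field and toy degrees of my own choosing: fix g, sample many f of a fixed degree, and record how often gcd(g, f) = 1 and the average degree of gcd(g, f).

```python
import random

P = 7  # a small prime, so the coefficient field is F_7 (toy stand-in for F_q)

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_divmod(a, b):
    """Division with remainder over F_P; a polynomial is a list of
    coefficients indexed by degree, with no trailing zeros."""
    a, b = trim(a[:]), trim(b[:])
    inv_lead = pow(b[-1], P - 2, P)              # inverse of the leading coefficient
    q = [0] * max(len(a) - len(b) + 1, 0)
    while len(a) >= len(b):
        d = len(a) - len(b)
        c = (a[-1] * inv_lead) % P
        q[d] = c
        for i, bi in enumerate(b):
            a[i + d] = (a[i + d] - c * bi) % P
        trim(a)
    return q, a                                  # quotient, remainder

def poly_gcd(a, b):
    a, b = trim(a[:]), trim(b[:])
    while b:
        _, r = poly_divmod(a, b)
        a, b = b, trim(r)
    return a

def random_poly(deg):
    """Uniform polynomial of exact degree deg over F_P."""
    return [random.randrange(P) for _ in range(deg)] + [random.randrange(1, P)]

g = random_poly(6)                               # the fixed higher-degree polynomial
trials, coprime, gcd_deg = 2000, 0, 0
for _ in range(trials):
    f = random_poly(4)
    d = poly_gcd(g, f)
    coprime += (len(d) == 1)                     # gcd is a nonzero constant
    gcd_deg += len(d) - 1
print("fraction of f coprime to g:", coprime / trials)
print("average degree of gcd(g, f):", gcd_deg / trials)
```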

DOI
01 Jun 2020
TL;DR: A negative result is provided by showing that, under certain assumptions, for almost every term, the higher-order model checking problem specialized for the term is k-EXPTIME hard with respect to the size of automata.
Abstract: We study a mixture between the average-case and worst-case complexities of higher-order model checking, the problem of deciding whether the tree generated by a given λY-term (or equivalently, a higher-order recursion scheme) satisfies the property expressed by a given tree automaton. Higher-order model checking has recently been studied extensively in the context of higher-order program verification. Although the worst-case complexity of the problem is k-EXPTIME complete for order-k terms, various higher-order model checkers have been developed that run efficiently for typical inputs, and program verification tools have been constructed on top of them. One may, therefore, hope that higher-order model checking can be solved efficiently in the average case, despite the worst-case complexity. We provide a negative result by showing that, under certain assumptions, for almost every term, the higher-order model checking problem specialized for the term is k-EXPTIME hard with respect to the size of automata. The proof is based on a novel intersection type system that characterizes terms that do not contain any useless subterms.
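
As a toy illustration of the automaton side only (finite trees and a bottom-up nondeterministic tree automaton of my own; the paper concerns the possibly infinite trees generated by λY-terms, which this sketch does not address):

```python
from itertools import product

def reachable_states(tree, delta):
    """States a nondeterministic bottom-up tree automaton can reach at the root.
    tree = (symbol, [subtrees]); delta maps (symbol, tuple_of_child_states)
    to a set of states."""
    sym, children = tree
    child_sets = [reachable_states(c, delta) for c in children]
    states = set()
    for combo in product(*child_sets):
        states |= delta.get((sym, combo), set())
    return states

def accepts(tree, delta, final):
    return bool(reachable_states(tree, delta) & final)

# Toy property over symbols a, b (leaves) and f (binary): "every leaf is a".
delta = {
    ("a", ()): {"ok"},
    ("b", ()): {"bad"},
    ("f", ("ok", "ok")): {"ok"},
    ("f", ("ok", "bad")): {"bad"},
    ("f", ("bad", "ok")): {"bad"},
    ("f", ("bad", "bad")): {"bad"},
}
t1 = ("f", [("a", []), ("f", [("a", []), ("a", [])])])
t2 = ("f", [("a", []), ("b", [])])
print(accepts(t1, delta, {"ok"}))   # True
print(accepts(t2, delta, {"ok"}))   # False
```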