Topic
Average-case complexity
About: Average-case complexity is a research topic. Over its lifetime, 1749 publications on this topic have been published, receiving 44972 citations.
Papers published on a yearly basis
Papers
09 Jul 2016
TL;DR: This work analyzes how complex a heuristic function must be to directly guide a state-space search algorithm towards the goal, and examines functions that evaluate states with a weighted sum of state features.
Abstract: We analyze how complex a heuristic function must be to directly guide a state-space search algorithm towards the goal. As a case study, we examine functions that evaluate states with a weighted sum of state features. We measure the complexity of a domain by the complexity of the required features. We analyze conditions under which the search algorithm runs in polynomial time and show complexity results for several classical planning domains.
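The weighted-sum evaluation described in the abstract can be sketched as a greedy best-first search driven by h(s) = Σᵢ wᵢ·fᵢ(s). This is a minimal illustration, not the paper's construction: the `features`, `weights`, and the toy integer domain below are assumptions chosen for clarity.

```python
import heapq

def greedy_search(start, goal, successors, features, weights):
    """Greedy best-first search guided by h(s) = sum_i w_i * f_i(s).

    `features` maps a state to a tuple of numeric feature values and
    `weights` is the matching tuple of weights -- both are illustrative
    assumptions, not the paper's specific planning-domain features.
    """
    h = lambda s: sum(w * f for w, f in zip(weights, features(s)))
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            # Reconstruct the start-to-goal path from the predecessor map.
            path = []
            while state is not None:
                path.append(state)
                state = came_from[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in came_from:
                came_from[nxt] = state
                heapq.heappush(frontier, (h(nxt), nxt))
    return None  # goal unreachable
```

On a toy domain where states are integers and the single feature is distance to the goal, `greedy_search(0, 3, lambda s: [s - 1, s + 1], lambda s: (abs(3 - s),), (1.0,))` walks directly to the goal; the paper's question is how complex such features must be for this direct guidance to succeed in polynomial time.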
17 citations
TL;DR: This work addresses the problem of computing a general function of several private inputs distributed among the processors of a network, while ensuring the correctness of the results and the privacy of the inputs, despite accidental or malicious faults in the system.
Abstract: We present efficient and practical algorithms for a large, distributed system of processors to achieve reliable computations in a secure manner. Specifically, we address the problem of computing a general function of several private inputs distributed among the processors of a network, while ensuring the correctness of the results and the privacy of the inputs, despite accidental or malicious faults in the system.
Communication is often the most significant bottleneck in distributed computing. Our algorithms maintain a low cost in local processing time, are the first to achieve optimal levels of fault-tolerance, and most importantly, have low communication complexity. In contrast to the best known previous methods, which require large numbers of rounds even for fairly simple computations, we devise protocols that use small messages and a constant number of rounds regardless of the complexity of the function to be computed. Through direct algebraic approaches, we separate the communication complexity of secure computing from the computational complexity of the function to be computed.
We examine security under both the modern approach of computational complexity-based cryptography and the classical approach of unconditional, information-theoretic security. We develop a clear and concise set of definitions that support formal proofs of claims to security, addressing an important deficiency in the literature. Our protocols are provably secure.
In the realm of information-theoretic security, we characterize those functions which two parties can compute jointly with absolute privacy. We also characterize those functions which a weak processor can compute using the aid of powerful processors without having to reveal the instances of the problem it would like to solve. Our methods include a promising new technique called a locally random reduction, which has given rise not only to efficient solutions for many of the problems considered in this work but to several powerful new results in complexity theory.
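The flavor of computing on private inputs described above can be illustrated with additive secret sharing, a standard building block of information-theoretically secure computation (this sketch is not the paper's protocol; the modulus, party count, and function here are assumptions for illustration).

```python
import secrets

P = 2**61 - 1  # a public prime modulus, chosen here for illustration

def share(x, n):
    """Split x into n additive shares that sum to x modulo P.

    Any n-1 shares are uniformly random and reveal nothing about x.
    """
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def secure_sum(inputs, n=3):
    """Each party splits its private input into one share per party;
    every party locally sums the shares it holds, and the partial sums
    reconstruct the total without any single party seeing another's input."""
    all_shares = [share(x, n) for x in inputs]
    partials = [sum(column) % P for column in zip(*all_shares)]
    return sum(partials) % P
```

Here the whole exchange takes a constant number of rounds regardless of how many parties participate, echoing (in a much simpler setting) the abstract's separation of communication cost from the function being computed.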
17 citations
15 Jun 1998
TL;DR: In this article, it was shown that the perfect matching problem is in the complexity class SPL (in the nonuniform setting) and that the complexity class LogFew coincides with NL in the nonuniform setting.
Abstract: We show that the perfect matching problem is in the complexity class SPL (in the nonuniform setting). This provides a better upper bound on the complexity of the matching problem, as well as providing motivation for studying the complexity class SPL. Using similar techniques, we show that the complexity class LogFew coincides with NL in the nonuniform setting. Finally, we provide evidence that our results also hold in the uniform setting.
17 citations
TL;DR: An extension of the complexity space of partial functions is constructed and it is shown that it is an appropriate mathematical tool for the complexity analysis of algorithms and for the validation of recursive definitions of programs.
Abstract: The study of the dual complexity space, introduced by S. Romaguera and M.P. Schellekens [Quasi-metric properties of complexity spaces, Topol. Appl. 98 (1999), pp. 311-322], constitutes a part of the interdisciplinary research on Computer Science and Topology. The relevance of this theory is given by the fact that it allows one to apply fixed point techniques of denotational semantics to complexity analysis. Motivated by this fact and with the intention of obtaining a mixed framework valid for both disciplines, a new complexity space formed by partial functions was recently introduced and studied by S. Romaguera and O. Valero [On the structure of the space of complexity partial functions, Int. J. Comput. Math. 85 (2008), pp. 631-640]. An application of the complexity space of partial functions to model certain processes that arise, in a natural way, in symbolic computation was given in the aforementioned reference. In this paper, we enter more deeply into the relationship between semantics and complexity analysis of programs. We construct an extension of the complexity space of partial functions and show that it is, at the same time, an appropriate mathematical tool for the complexity analysis of algorithms and for the validation of recursive definitions of programs. As applications of our complexity framework, we show the correctness of the denotational specification of the factorial function and give an alternative formal proof of the asymptotic upper bound for the average case analysis of Quicksort.
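The Quicksort bound mentioned at the end of the abstract is the classical average-case result; as a standard reference point (and not the paper's fixed-point-based proof), the expected number of comparisons on a uniformly random input of n distinct keys satisfies the recurrence

```latex
C(n) = (n-1) + \frac{2}{n} \sum_{k=0}^{n-1} C(k), \qquad C(0) = C(1) = 0,
```

which solves to C(n) = 2(n+1)Hₙ − 4n ≈ 2n ln n, where Hₙ is the n-th harmonic number, yielding the familiar O(n log n) average-case upper bound.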
17 citations