
Showing papers on "Average-case complexity published in 1998"


Book
10 Dec 1998
TL;DR: This book explains the development of information-based complexity; interesting topics covered include breaking the curse of dimensionality, very high-dimensional integration, and mathematical finance.
Abstract: Part I. Fundamentals: 1. Introduction 2. Information-based complexity 3. Breaking the curse of dimensionality Part II. Some Interesting Topics: 4. Very high-dimensional integration and mathematical finance 5. Complexity of path integration 6. Are ill-posed problems solvable? 7. Complexity of nonlinear problems 8. What model of computation should be used by scientists? 9. Do impossibility theorems from formal models limit scientific knowledge? 10. Complexity of linear programming 11. Complexity of verification 12. Complexity of implementation testing 13. Noisy information 14. Value of information in computation 15. Assigning values to mathematical hypotheses 16. Open problems 17. A brief history of information-based complexity Part III. References: 18. A guide to the literature Bibliography Subject index Author index.

277 citations


Journal ArticleDOI
TL;DR: It is shown that the complexity of an infinite string contained in a Σ2-definable set of strings is upper bounded by the Hausdorff dimension of this set, and that this upper bound is tight.
Abstract: This paper links the concepts of Kolmogorov complexity (in complexity theory) and Hausdorff dimension (in fractal geometry) for a class of recursive (computable) ω -languages.

99 citations


Book
01 Dec 1998

85 citations


Journal ArticleDOI
TL;DR: In this article, the authors define computational complexity for dynamical systems, discrete or continuous in time, on the basis of an intrinsic time scale of the system, and classify them into the computational complexity classes Pd, Co-RPd, NPd and EXPd.

82 citations


Proceedings ArticleDOI
04 May 1998
TL;DR: A rigorous complexity analysis of the (1+1) evolutionary algorithm for linear functions with Boolean inputs is given, and it is found that the expected run time of this algorithm is at most Θ(n ln n) for linear functions with n variables.
Abstract: Evolutionary algorithms (EAs) are heuristic randomized algorithms which, by many impressive experiments, have been proven to behave quite well for optimization problems of various kinds. In this paper, a rigorous complexity analysis of the (1+1) evolutionary algorithm for linear functions with Boolean inputs is given. The analysis is carried out for different mutation rates. The main contribution of the paper is not the result that the expected run time of the (1+1) evolutionary algorithm is at most Θ(n ln n) for linear functions with n variables, but the presentation of methods showing how this result can be proven rigorously.

64 citations
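The algorithm analyzed above is short enough to state directly. Below is a minimal Python sketch of the (1+1) EA with the standard mutation rate 1/n applied to a linear pseudo-Boolean function; the particular weights, stopping rule and iteration cap are illustrative assumptions, not taken from the paper.

```python
import random

def one_plus_one_ea(weights, max_iters=100_000, seed=0):
    """(1+1) EA maximizing the linear function f(x) = sum(w_i * x_i), x in {0,1}^n.

    Standard mutation: flip each bit independently with probability 1/n.
    Returns the best bit string found and the number of iterations used.
    """
    rng = random.Random(seed)
    n = len(weights)
    x = [rng.randint(0, 1) for _ in range(n)]          # random initial search point
    fx = sum(w * b for w, b in zip(weights, x))
    optimum = sum(w for w in weights if w > 0)          # known optimum of a linear f
    for t in range(1, max_iters + 1):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]   # mutate each bit with prob. 1/n
        fy = sum(w * b for w, b in zip(weights, y))
        if fy >= fx:                                    # accept if not worse (elitism)
            x, fx = y, fy
        if fx == optimum:
            return x, t
    return x, max_iters

# Example: OneMax (all weights 1) with n = 50; the expected hitting time is Theta(n ln n).
best, iters = one_plus_one_ea([1] * 50)
print(iters)
```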


Journal ArticleDOI
TL;DR: It is concluded that algorithms optimized for the average case are not only much simpler to implement, but have moderate storage requirements and can even run faster for the majority of problems.
Abstract: This paper examines worst-case and average-case complexity measures of ray-shooting algorithms in order to find the answer to the question why computer graphics practitioners prefer heuristic methods to extensively studied worst-case optimal algorithms. It demonstrates that ray-shooting requires at least logarithmic time in the worst case and discusses strategies for designing such worst-case optimal algorithms. It also examines the lower bounds of storage complexity of logarithmic-time algorithms and concludes that logarithmic time has a very high price in terms of required storage. In order to find average-case measures, a probabilistic model of the scene is established. We conclude that algorithms optimized for the average case are not only much simpler to implement, but have moderate storage requirements and can even run faster for the majority of problems.

38 citations


Proceedings ArticleDOI
08 Nov 1998
TL;DR: A new bound is proved on the sum of the Betti numbers of one connected component of a basic semi-algebraic set, which is an improvement over the Oleinik-Petrovsky-Thom-Milnor bound and implies that the topological complexity of a single cell is bounded by O(n^(k-1)).
Abstract: The problem of bounding the combinatorial complexity of a single connected component (a single cell) of the complement of a set of n geometric objects in R^k, each object of constant description complexity, is an important problem in computational geometry which has attracted much attention over the past decade. It has been conjectured that the combinatorial complexity of a single cell is bounded by a function much closer to O(n^(k-1)) than to O(n^k), which is the bound for the combinatorial complexity of the whole arrangement. Till now, this was known to be true only for k ≤ 3 and only for some special cases in higher dimensions. A classic result in real algebraic geometry due to Oleinik-Petrovsky, Thom and Milnor bounds the topological complexity (the sum of the Betti numbers) of basic semi-algebraic sets. However, till now no better bounds were known if we restricted attention to a single connected component of a basic semi-algebraic set. In this paper, we show how these two problems are related. We prove a new bound on the sum of the Betti numbers of one connected component of a basic semi-algebraic set which is an improvement over the Oleinik-Petrovsky-Thom-Milnor bound. This also implies that the topological complexity of a single cell, measured by the sum of the Betti numbers, is bounded by O(n^(k-1)).

35 citations
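For context, the classical Oleinik-Petrovsky-Thom-Milnor-type bound that the paper improves on for a single connected component has, up to constants, the following textbook shape; this is only the generic form, not the paper's refined statement.

```latex
% For a basic semi-algebraic set
%   S = { x \in \mathbb{R}^k : P_1 \ge 0, \dots, P_s \ge 0 },   \deg P_i \le d,
% the classical bound on the sum of the Betti numbers is
\sum_{i \ge 0} b_i(S) \;\le\; \bigl(O(s\,d)\bigr)^{k},
% whereas the paper bounds the same quantity for a single connected component
% of S, which is what yields the O(n^{k-1}) bound for a single cell.
```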


Journal ArticleDOI
TL;DR: A first-order and a high-order algorithm for solving linear complementarity problems, both of which are implicitly associated with a large neighborhood whose size may depend on the dimension of the problems, are studied.
Abstract: In this paper we study a first-order and a high-order algorithm for solving linear complementarity problems. These algorithms are implicitly associated with a large neighborhood whose size may depend on the dimension of the problems. The complexity of these algorithms depends on the size of the neighborhood. For the first-order algorithm, we achieve the complexity bound which the typical large-step algorithms possess. It is well known that the complexity of large-step algorithms is greater than that of short-step ones. By using high-order power series (hence the name high-order algorithm), the iteration complexity can be reduced. We show that the complexity upper bound for our high-order algorithms is equal to that for short-step algorithms.

23 citations
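For readers outside optimization, the problem class and the neighborhood notion behind these complexity bounds can be written down in standard form; the following is the textbook monotone LCP formulation and the usual large neighborhood, not anything specific to this paper.

```latex
% Linear complementarity problem: given M \in \mathbb{R}^{n \times n} and q \in \mathbb{R}^n,
% find a pair (x, s) with
s = Mx + q, \qquad x \ge 0, \quad s \ge 0, \quad x^{\mathsf T} s = 0 .
% Path-following methods track the central path  x_i s_i = \mu  as \mu \to 0,
% staying inside a neighborhood such as the large neighborhood
\mathcal{N}^{-}_{\infty}(\gamma) \;=\; \Bigl\{ (x, s) > 0 \;:\; x_i s_i \ge \gamma\, \tfrac{x^{\mathsf T} s}{n} \ \text{ for all } i \Bigr\},
% whose size (through the parameter \gamma) enters the iteration-complexity bounds.
```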


Book
01 Dec 1998
Abstract: The study of sparse hard sets and sparse complete sets has been a central research area in complexity theory for nearly two decades. Recently, new results using unexpected techniques have been obtained. They provide new and easier proofs of old theorems, proofs of new theorems that unify previously known results, resolutions of old conjectures, and connections to the fascinating world of randomization and derandomization. In this article we give an exposition of this vibrant research area.

20 citations


Journal ArticleDOI
TL;DR: In the course of the proof, this work relates statistical knowledge complexity to perfect knowledge complexity; specifically, it is shown that, for the honest verifier, these hierarchies coincide up to a logarithmic additive term.
Abstract: We study the computational complexity of languages which have interactive proofs of logarithmic knowledge complexity. We show that all such languages can be recognized in ${\cal BPP}^{\cal NP}$. Prior to this work, for languages with greater-than-zero knowledge complexity only trivial computational complexity bounds were known. In the course of our proof, we relate statistical knowledge complexity to perfect knowledge complexity; specifically, we show that, for the honest verifier, these hierarchies coincide up to a logarithmic additive term.

18 citations


Proceedings ArticleDOI
15 Jun 1998
TL;DR: In this article, it is shown that the perfect matching problem is in the complexity class SPL (in the nonuniform setting) and that the complexity class LogFew coincides with NL in the nonuniform setting.
Abstract: We show that the perfect matching problem is in the complexity class SPL (in the nonuniform setting). This provides a better upper bound on the complexity of the matching problem, as well as providing motivation for studying the complexity class SPL. Using similar techniques, we show that the complexity class LogFew coincides with NL in the nonuniform setting. Finally, we provide evidence that our results also hold in the uniform setting.

Journal ArticleDOI
TL;DR: This work discusses implementations of the Adaptive Resonance Theory on a serial machine and suggests that it is possible to formulate ART in a non-recursive algorithm such that the complexity is of order O(MN) only.
Abstract: We discuss implementations of the Adaptive Resonance Theory (ART) on a serial machine. The standard formulation of ART, which was inspired by recurrent brain structures, corresponds to a recursive algorithm. This induces an algorithmic complexity of order O(N^2)+O(MN) in the worst and the average case, N being the number of categories and M the input dimension. It is possible, however, to formulate ART as a non-recursive algorithm such that the complexity is of order O(MN) only.
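A minimal sketch of the kind of single-pass, non-recursive category search described above, in the style of ART-1 (binary inputs, choice function plus vigilance test). The specific choice and match functions below are the standard ART-1 ones and are used here only for illustration; they are not claimed to be the authors' exact formulation.

```python
import numpy as np

def art1_step(I, weights, rho=0.75, alpha=0.001):
    """One presentation of a binary input I to an ART-1-style network.

    weights: list of binary numpy arrays (one prototype per category).
    Returns the index of the chosen (or newly created) category.
    A single pass over the N categories with O(M) work each -> O(M*N) per input.
    """
    best_j, best_T = None, -1.0
    norm_I = I.sum()
    for j, w in enumerate(weights):               # one non-recursive pass
        overlap = np.minimum(I, w).sum()          # |I AND w_j|
        if overlap / max(norm_I, 1) < rho:        # vigilance test
            continue
        T = overlap / (alpha + w.sum())           # choice function
        if T > best_T:
            best_j, best_T = j, T
    if best_j is None:                            # no category matches: create one
        weights.append(I.copy())
        return len(weights) - 1
    weights[best_j] = np.minimum(I, weights[best_j])   # fast learning: w <- I AND w
    return best_j

# Example usage
cats = []
for pattern in (np.array([1, 1, 0, 0]), np.array([1, 1, 0, 1]), np.array([0, 0, 1, 1])):
    print(art1_step(pattern, cats, rho=0.7))
```

Since every category is examined exactly once and each examination costs O(M), one input presentation costs O(MN).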

Journal ArticleDOI
TL;DR: The motivation for the use of interval computations in data processing and the basic problems of interval mathematics are explained.
Abstract: Before we start explaining why we need to go beyond interval computations, let us briefly recall our motivation for the use of interval computations in data processing. Traditional data processing methods of numerical mathematics are based on the assumption that we know the exact values of the input quantities. In reality, the data come from measurements, and measurements are never 100% precise; hence, the actual value x of each input quantity may differ from its measurement result x̃. In some cases, we know the probabilities of different values of the error Δx = x̃ - x, but in most cases, we only know a guaranteed upper bound Δ on the error; in these cases, the only information we have about the (unknown) actual value x is that x belongs to the interval x = [x̃ - Δ, x̃ + Δ]. One of the basic problems of interval mathematics is, therefore, as follows: given a data processing algorithm f(x1, ..., xn) and n intervals x1, ..., xn, compute the range y of possible values of y = f(x1, ..., xn) when xi ∈ xi.
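A minimal sketch of the basic task described above: given intervals for the inputs, compute an enclosure of the range of y = f(x1, ..., xn) by evaluating f in interval arithmetic. The function f below is an arbitrary illustrative example, and naive interval evaluation generally returns an enclosure of the true range rather than the exact range.

```python
class Interval:
    """Closed interval [lo, hi] with the arithmetic needed for range enclosure."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def f(x1, x2):
    # Illustrative data-processing function y = f(x1, x2).
    return x1 * x1 + x2 - x1 * x2

# Measured values with guaranteed error bounds: x1 = 2 +/- 0.1, x2 = 3 +/- 0.2.
x1 = Interval(1.9, 2.1)
x2 = Interval(2.8, 3.2)
print(f(x1, x2))   # an interval guaranteed to contain every possible value of y
```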

Journal ArticleDOI
TL;DR: It is shown that the linear complexity for one-symbol substitution of any periodic sequence over GF(q) can be computed without any condition on the minimal polynomial of the sequence.
Abstract: It is shown that the linear complexity for one-symbol substitution of any periodic sequence over GF(q) can be computed without any condition on the minimal polynomial of the sequence.
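The linear complexity in question is the length of the shortest LFSR that generates the sequence; over GF(2) it can be computed with the Berlekamp-Massey algorithm. The sketch below implements that standard algorithm (not the paper's substitution-specific result) and can be used to observe how substituting a single symbol of a periodic sequence changes the complexity.

```python
def linear_complexity_gf2(s):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR generating bit list s."""
    n = len(s)
    c = [0] * n; b = [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = s[i]                          # discrepancy between LFSR output and s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d == 1:
            t = c[:]
            shift = i - m
            for j in range(0, n - shift): # c(x) <- c(x) + x^(i-m) * b(x)
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

seq = [0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1]   # a period-8 sequence, repeated
print(linear_complexity_gf2(seq))
seq2 = seq[:]; seq2[5] ^= 1                               # substitute one symbol
print(linear_complexity_gf2(seq2))
```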

Proceedings ArticleDOI
31 May 1998
TL;DR: A low-complexity FIR filter design with integer coefficients, efficiently implementable using primitive operator directed graphs (PODG), is presented; genetic algorithms (GAs) are used in conjunction with a heuristic graph design algorithm to provide a solution set which represents different compromises between performance, complexity and filter order.
Abstract: This paper considers the design of low complexity FIR filters. Complexity is reduced by constraining the filters to have integer coefficients, which can be efficiently implemented using primitive operator directed graphs (PODG). Genetic algorithms (GAs) are used in conjunction with a heuristic graph design algorithm, to provide a solution set which represents different compromises between performance, complexity and filter order. Example results are presented for both one and two dimensional filters, and are shown to provide both superior performance and complexity, compared to previous methods. The main benefits result from the use of a joint optimization, rather than a separable 2-stage approach. The use of a PODG representation is shown to provide significant improvements over a canonic signed digit (CSD) or signed power-of-two (SPT) representation.
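To make the complexity comparison concrete: a common cost model counts the nonzero digits of each integer coefficient in canonic signed digit (CSD) form, since every nonzero digit beyond the first costs roughly one adder or subtractor. The sketch below converts coefficients to CSD (non-adjacent form) and estimates that cost; it illustrates the CSD baseline the paper compares against, not the PODG/GA method itself, and the example coefficients are arbitrary.

```python
def csd_digits(n):
    """Canonic signed digit (non-adjacent form) representation of a non-negative integer.

    Returns digits in {-1, 0, +1}, least-significant first, with no two
    adjacent nonzero digits.
    """
    digits = []
    while n != 0:
        if n & 1:
            d = 2 - (n & 3)          # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

def adder_cost(coefficients):
    """Rough hardware cost of constant FIR coefficients in the CSD model:
    each nonzero CSD digit beyond the first per coefficient needs one adder/subtractor."""
    return sum(max(sum(1 for d in csd_digits(abs(c)) if d) - 1, 0) for c in coefficients)

coeffs = [7, 93, -45, 93, 7]          # illustrative integer filter coefficients
print([csd_digits(abs(c)) for c in coeffs])
print("adders needed:", adder_cost(coeffs))
```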


Dissertation
01 Jan 1998
TL;DR: This thesis study studies the problem of computing a generator set of an unknown group, given a membership testing oracle for the group, and examines the close relation of this problem with concept learning in the framework of learning theory.
Abstract: Group theory, is a record of bonaade research work done during 1993-1998 under my supervision. The research work presented in this thesis has not formed the basis for the award to the candidate of any Degree, Diploma, Associateship, Fellowship or other similar titles. It is further certiied that the thesis represents independent work by the candidate and collaboration when existed was necessitated by the nature and scope of problems dealt with. Abstract The study of counting complexity classes has been a very fruitful and promising area in complexity theory. This study has given important insights into the inherent complexity of many natural computational problems. Problems arising from group theory have been studied by many researchers. These problems are interesting from the complexity-theoretic viewpoint since the complexity status of many of these problems is not settled. In this dissertation, we study some problems from group theory in the context of counting complexity. More speciically, we place some basic computational group-theoretic problems in counting classes of low complexity. These results help in giving further insights into the intriguing nature of the complexity of these problems. This thesis consists of two parts. In Chapter 4, which comprises the rst part, we study the complexity of three basic computational group-theoretic problems over black-box groups. The problems are Membership Testing, Order Veriication and Isomorphism Testing. These are computational problems for which no polynomial-time algorithms exist. It was shown that over general black-box groups, Membership Testing is in NP \ co-AM, Order Veriication is in AM \ co-AM, and Isomorphism Testing is in AM BS84, Bab92]. We show that these problems, over solvable black-box groups, are in the counting class SPP. The proof of this result is built on a constructive version of the fundamental theorem of nite abelian groups. The class SPP is known to be low for the counting classes PP, C = P and Mod k P for k 2 FFK94]. Since it is unlikely that the class NP is contained in SPP, these upper bounds give evidence that these problems are unlikely to be hard for NP. In the second part of the thesis we study the problem of computing a generator set of an unknown group, given a membership testing oracle for the group. Because of the close relation of this problem with concept learning, we study this problem in the framework of learning theory. In Chapter 5, for analyzing the …

Proceedings ArticleDOI
07 Sep 1998
TL;DR: The maximum order complexity determines the shortest feedback shift register which can generate a given sequence utilising a memoryless, possibly non-linear, feedback function and is a potentially useful measure of the randomness of a sequence.
Abstract: The maximum order complexity determines the shortest feedback shift register which can generate a given sequence utilising a memoryless, possibly non-linear, feedback function. The maximum order complexity of a sequence is a potentially useful measure of the randomness of a sequence. In this paper a statistical test based on the maximum order complexity is proposed. The proposed test requires that the distribution of the maximum order complexity of a random sequence of arbitrary length is known. Erdmann and Murphy (1997) derived an expression which approximates the distribution of the maximum order complexity. Evaluating this expression is computationally expensive and an alternative approximation to the distribution of the maximum order complexity is proposed. The alternative approximation is then used to construct a computationally efficient statistical test which may be used to evaluate the randomness of a sequence. The proposed test is specifically concerned with binary sequences and the distribution of the maximum order complexity of binary sequences.
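Concretely, the maximum order complexity of a sequence is the smallest register length L such that every length-L window of the sequence is always followed by the same symbol. The brute-force sketch below computes it directly from that characterization; it is quadratic and only illustrative (efficient computation normally uses suffix-tree-style methods), and it is not the statistical test proposed in the paper.

```python
def max_order_complexity(s):
    """Smallest register length L such that every length-L window of s is always
    followed by the same symbol, i.e. the shortest (possibly non-linear) FSR
    that can generate s from its first L symbols."""
    n = len(s)
    for L in range(n):
        successor = {}
        consistent = True
        for i in range(n - L):
            window = tuple(s[i:i + L])
            nxt = s[i + L]
            if successor.setdefault(window, nxt) != nxt:
                consistent = False
                break
        if consistent:
            return L
    return n  # degenerate fallback; never reached for a non-empty sequence

print(max_order_complexity([0, 1, 0, 1, 1, 0, 1, 0]))
print(max_order_complexity([0] * 16))                  # constant sequence -> 0
```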

Book ChapterDOI
24 Aug 1998
TL;DR: This paper investigates in terms of Kolmogorov complexity the differences between the information necessary to compute a recursive function and the information contained in its graph.
Abstract: This paper investigates in terms of Kolmogorov complexity the differences between the information necessary to compute a recursive function and the information contained in its graph. Our first result is that the complexity of the initial parts of the graph of a recursive function, although bounded, has almost never a limit. The second result is that the complexity of these initial parts approximate the complexity of the function itself in most cases (and in the average) but not always.


Journal ArticleDOI
TL;DR: It is proved that no algorithms asymptotically faster than the known O(n log n) ones can solve these problems, and for every ε > 0 approximation algorithms with linear running time O(n log(1/ε)) are developed that deliver feasible schedules whose makespan is at most 1 + ε times the optimum makespan.
Abstract: We consider the scheduling problems F2||Cmax and F2|no-wait|Cmax, i.e. makespan minimization in a two-machine flow shop, with and without no-wait in process. For both problems, solution algorithms based on sorting with O(n log n) running time are known, where n denotes the number of jobs [1, 2]. We prove that no asymptotically faster algorithms can solve these problems. This is done by establishing Ω(n log n) lower bounds in the algebraic computation tree model of computation. Moreover, we develop for every ε > 0 approximation algorithms with linear running time O(n log(1/ε)) that deliver feasible schedules whose makespan is at most 1 + ε times the optimum makespan.
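For F2||Cmax, the classical O(n log n) sorting-based algorithm referred to above is Johnson's rule. A minimal sketch (the job data are illustrative):

```python
def johnson_two_machine(jobs):
    """Johnson's rule for F2||Cmax.

    jobs: list of (a_j, b_j) processing times on machines 1 and 2.
    Returns (job order, makespan). Runs in O(n log n) because of the sorts.
    """
    first = sorted((j for j in range(len(jobs)) if jobs[j][0] < jobs[j][1]),
                   key=lambda j: jobs[j][0])            # a_j < b_j: increasing a_j
    second = sorted((j for j in range(len(jobs)) if jobs[j][0] >= jobs[j][1]),
                    key=lambda j: -jobs[j][1])          # a_j >= b_j: decreasing b_j
    order = first + second
    c1 = c2 = 0
    for j in order:                                     # simulate the permutation schedule
        a, b = jobs[j]
        c1 += a
        c2 = max(c2, c1) + b
    return order, c2

# Illustrative instance: (machine-1 time, machine-2 time) per job.
jobs = [(3, 6), (5, 2), (1, 2), (6, 6), (2, 4)]
print(johnson_two_machine(jobs))
```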

Proceedings ArticleDOI
24 Jul 1998
TL;DR: This paper proves that any (possibly randomized) algorithm that produces a local minimum of a function f chosen from a sufficiently “rich” concept class, using a membership oracle for f, must ask Ω(n^2) membership queries in the worst case, and improves the time and query complexity of known learning algorithms for the class of O(log n)-term DNF.
Abstract: In this paper we study the query complexity of finding local minimum points of a boolean function. This task occurs frequently in exact learning algorithms for many natural classes, such as monotone DNF, O(log n)-term DNF, unate DNF, and decision trees. On the negative side, we prove that any (possibly randomized) algorithm that produces a local minimum of a function f chosen from a sufficiently "rich" concept class, using a membership oracle for f, must ask Ω(n^2) membership queries in the worst case. In particular, this lower bound applies to the class of decision trees. A simple algorithm is known that achieves this lower bound. On the positive side, we show that for the class of O(log n)-term DNF, finding local minimum points requires only O(n log n) membership queries (and more generally O(tn) membership queries for t-term DNF with t ≤ n). This efficient procedure improves the time and query complexity of known learning algorithms for the class of O(log n)-term DNF.
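The "simple algorithm" mentioned above is essentially greedy descent on the hypercube using the membership oracle: keep dropping 1-bits as long as the function stays positive, repeating passes until no single bit can be dropped. Below is a minimal sketch that also counts queries; the oracle is an arbitrary 2-term DNF chosen for illustration, and this is only the folklore procedure, not the paper's improved method for O(log n)-term DNF.

```python
def local_minimum(f, x):
    """Reduce a positive example x (f(x) == 1) to a locally minimal one by greedy
    descent: repeatedly drop 1-bits as long as the function stays 1.  Counts
    membership queries; repeated passes give O(n^2) queries in the worst case."""
    x = list(x)
    queries = 0
    changed = True
    while changed:
        changed = False
        for i in range(len(x)):
            if x[i] == 1:
                x[i] = 0
                queries += 1
                if f(x):            # membership query on the flipped point
                    changed = True  # keep the flip and continue descending
                else:
                    x[i] = 1        # flip made f zero: restore the bit
    return x, queries

# Illustrative membership oracle: a 2-term DNF, f(x) = (x0 AND x1) OR (x2 AND x3).
f = lambda x: int((x[0] and x[1]) or (x[2] and x[3]))
print(local_minimum(f, [1, 1, 1, 1, 0, 1]))
```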

Proceedings ArticleDOI
16 Aug 1998
TL;DR: A class of optimum multiuser detection problems which can be solved with polynomial complexity in the number of users is identified; and the result is applied to a DS-CDMA system.
Abstract: In this paper, we identify a class of optimum multiuser detection problems which can be solved with polynomial complexity in the number of users; and apply the result to a DS-CDMA system.

01 Jan 1998
TL;DR: A spatial index suitable for implementation of a multidimensionally keyed database in an unreliable, decentralized, distributed environment is shown to have complexity comparable to the Internet's Domain Name Service and better than USENET or Web search engines.
Abstract: Complexity of Adaptive Spatial Indexing for Robust Distributed Data, by Matthew Vincent Mahoney (thesis advisor: Philip K. Chan, Ph.D.). A spatial index suitable for implementation of a multidimensionally keyed database (such as a text retrieval system) in an unreliable, decentralized, distributed environment is shown to have complexity comparable to the Internet's Domain Name Service and better than USENET or Web search engines. The index is a graph mapped into Euclidean space with high smoothness, a property allowing efficient backtrack-free directed search techniques such as hill climbing. Updates are tested using random search, then edges are adaptively added to bypass local minima, network congestion, and hardware failures. Protocols are described. Empirical average-case complexities for n data items are: storage, O(n log n); query, O(log n log log n); update, O(log n log log n), provided that the number of dimensions is fixed or grows no faster than O(log n).
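A minimal sketch of the backtrack-free directed search the abstract alludes to: greedy hill climbing through a graph whose nodes are mapped into Euclidean space, always moving to the neighbor closest to the query point. This is a generic illustration of the idea on a random graph, not the thesis's index structure, update rule, or protocols.

```python
import random, math

def greedy_search(coords, adj, start, query):
    """Greedy hill climbing: from `start`, repeatedly move to the neighbor closest to
    `query`; stop at a node none of whose neighbors is closer (a local minimum).
    On a sufficiently 'smooth' embedding this finds a nearby node without backtracking."""
    current = start
    while True:
        best = min(adj[current], key=lambda v: math.dist(coords[v], query),
                   default=current)
        if math.dist(coords[best], query) >= math.dist(coords[current], query):
            return current
        current = best

# Build a small random graph in the unit square: each node linked to a few random nodes.
random.seed(1)
n = 200
coords = [(random.random(), random.random()) for _ in range(n)]
adj = {u: random.sample(range(n), 6) for u in range(n)}
query = (0.25, 0.75)
hit = greedy_search(coords, adj, 0, query)
print(hit, coords[hit], math.dist(coords[hit], query))
```

On a poorly connected or non-smooth embedding this can stop at a local minimum far from the query, which is exactly why the thesis adaptively adds edges to bypass such minima.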


Book ChapterDOI
01 Jan 1998
TL;DR: This chapter bound the generalization error of a class of Radial Basis Functions, for certain well defined function learning tasks, in terms of the number of parameters and number of examples, which sheds light on ways to choose an appropriate network architecture for a particular problem.
Abstract: Feedforward networks are a class of approximation techniques that can be used to learn to perform some tasks from a finite set of examples. The question of the capability of a network to generalize from a finite training set to unseen data is clearly of crucial importance. In this chapter, we bound the generalization error of a class of Radial Basis Functions, for certain well defined function learning tasks, in terms of the number of parameters and number of examples. We show that the total generalization error is partly due to the insufficient representational capacity of the network (because of the finite size of the network being used) and partly due to insufficient information about the target function because of the finite number of samples. Prior research has looked at representational capacity or sample complexity in isolation. In the spirit of A. Barron, H. White and S. Geman we develop a framework to look at both. While the bound that we derive is specific for Radial Basis Functions, a number of observations deriving from it apply to any approximation technique. Our result also sheds light on ways to choose an appropriate network architecture for a particular problem and the kinds of problems that can be effectively solved with finite resources, i.e., with finite number of parameters and finite amounts of data.
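The shape of the result can be summarized by the usual two-term decomposition; the rates below are written schematically (constants and the precise log/confidence terms suppressed) and are meant only to convey the structure of the bound, not the authors' exact statement.

```latex
% Generalization error of an RBF network with n basis functions, trained on
% N examples in d input dimensions, split into two parts:
\mathbb{E}\!\left[(f_0 - \hat f_{n,N})^2\right]
  \;\lesssim\;
  \underbrace{O\!\left(\frac{1}{n}\right)}_{\substack{\text{approximation error}\\ \text{(finite network size)}}}
  \;+\;
  \underbrace{O\!\left(\sqrt{\frac{n\,d\,\ln(nN)}{N}}\right)}_{\substack{\text{estimation error}\\ \text{(finite sample size)}}}
% For a fixed number of examples N there is an optimal network size n that
% balances the two terms, which is the architecture-selection message of the chapter.
```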

Proceedings ArticleDOI
07 Sep 1998
TL;DR: Making use of the parity-check polynomial h(x) of a Reed-Muller code, a new algorithm for the computation of the quadratic complexity profile of a sequence is developed.
Abstract: The linear complexity of a binary sequence is an important attribute in applications such as secure communications. In this article we introduce the concept of the quadratic complexity of a binary sequence. It is shown that this complexity measure is closely linked to the theory of primitive Reed-Muller codes. Making use of the parity-check polynomial h(x) of a Reed-Muller code, a new algorithm for the computation of the quadratic complexity profile of a sequence is developed. Experimental results confirm the close resemblance between the expected theoretical and the practical behaviour.

Book ChapterDOI
01 Jun 1998
TL;DR: Another Monte Carlo algorithm, following from an original algorithm [4], is proposed; the average performance of the algorithm is polynomial and the probability that the algorithm fails to yield a correct answer for some data is less than a prescribed ε (e.g. less than 1%).
Abstract: Recently a randomized algorithm based on the Davis and Putnam procedure was designed in [16] for the purpose of solving the satisfiability problem. In this letter another Monte Carlo algorithm, following from an original algorithm [4], is proposed. The average performance of the algorithm is polynomial and the probability that the algorithm fails to yield a correct answer for some data is less than ε. Results are compared with those given in [16] and show an interesting performance for our algorithm.

Book ChapterDOI
01 Jan 1998
TL;DR: This chapter looks more carefully at the notions of being easy or hard on average; as it turns out, equating easy on average with polynomially bounded expected running time has serious drawbacks.
Abstract: In Topic 8, we noticed that the expected running time of an algorithm depends upon the underlying distribution of the instances and may in general be different from the worst-case running time. Now we want to look more carefully at the notions of being easy or hard on average. As it turns out, equating easy on average with polynomially-bounded expected running time has serious drawbacks. But in 1984, L. Levin proposed an alternate definition and demonstrated its robustness, thereby initiating the study of average-case complexity.
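For reference, Levin's replacement for "polynomially bounded expected running time" is the notion of a time bound that is polynomial on average with respect to the input distribution μ:

```latex
% t : \{0,1\}^* \to \mathbb{N} is polynomial on \mu-average iff there is an \varepsilon > 0 with
\sum_{x \in \{0,1\}^*} \mu(x)\, \frac{t(x)^{\varepsilon}}{|x|} \;<\; \infty ,
% where \mu(x) is the probability of instance x.  Unlike "the expected running time
% is polynomial", this notion is closed under polynomial scaling of t and under
% changes of machine model, which is what makes Levin's theory robust.
```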

Journal Article
TL;DR: This work presents an explicit continuous function f from the Deny class that cannot be represented by a superposition of lower-degree functions of the same class on the first level of the superposition and arbitrary Lipschitz functions on the remaining levels.
Abstract: The superposition (or composition) problem is the problem of representing a function f by a superposition of "simpler" (in various senses) functions from a set Ω. In terms of circuit theory, this means the possibility of computing f by a finite circuit with fan-out-1 gates computing functions from Ω. Using a discrete approximation and communication approach to this problem, we present an explicit continuous function f from the Deny class that cannot be represented by a superposition of lower-degree functions of the same class on the first level of the superposition and arbitrary Lipschitz functions on the remaining levels. The construction of the function f is based on a particular pointer function g (which belongs to uniform AC0) with linear one-way communication complexity.