
Showing papers on "Average-case complexity published in 1984"


Proceedings ArticleDOI
01 Dec 1984
TL;DR: The main conclusion is that the ratio between performance and computational complexity is far better for the IMM algorithm.
Abstract: For a linear discrete time system with Markovian coefficients a new filtering algorithm is given, which is called the Interacting Multiple Model (IMM) algorithm. The mathematical support for this algorithm is outlined and a qualitative comparison with other known filtering algorithms is made. The main conclusion is that the ratio between performance and computational complexity is far better for the IMM algorithm.
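The IMM recursion described in the paper mixes the estimates of several mode-matched filters before each update. As a rough illustration only (the scalar random-walk models, variances, and function names below are hypothetical, not taken from the paper), one IMM step might be sketched as:

```python
import math

def kalman_step(x, P, z, q, r):
    # one scalar Kalman step for the model x' = x + w, z = x + v,
    # with process-noise variance q and measurement-noise variance r
    Pp = P + q                       # predicted variance (predicted state is x)
    S = Pp + r                       # innovation variance
    K = Pp / S                       # Kalman gain
    x_new = x + K * (z - x)
    P_new = (1 - K) * Pp
    lik = math.exp(-(z - x) ** 2 / (2 * S)) / math.sqrt(2 * math.pi * S)
    return x_new, P_new, lik

def imm_step(xs, Ps, mu, Pi, z, qs, r):
    m = len(xs)
    # 1) mixing probabilities mu_{i|j} = Pi[i][j] * mu[i] / cbar[j]
    cbar = [sum(Pi[i][j] * mu[i] for i in range(m)) for j in range(m)]
    mix = [[Pi[i][j] * mu[i] / cbar[j] for i in range(m)] for j in range(m)]
    # 2) mixed initial condition for each model j
    x0 = [sum(mix[j][i] * xs[i] for i in range(m)) for j in range(m)]
    P0 = [sum(mix[j][i] * (Ps[i] + (xs[i] - x0[j]) ** 2) for i in range(m))
          for j in range(m)]
    # 3) model-matched filtering
    out = [kalman_step(x0[j], P0[j], z, qs[j], r) for j in range(m)]
    # 4) mode-probability update from the filter likelihoods
    c = sum(out[j][2] * cbar[j] for j in range(m))
    mu_new = [out[j][2] * cbar[j] / c for j in range(m)]
    # 5) combined output estimate
    x_comb = sum(mu_new[j] * out[j][0] for j in range(m))
    return [o[0] for o in out], [o[1] for o in out], mu_new, x_comb
```

The cost is essentially that of running one Kalman filter per model plus the mixing step, which is the "computational complexity" side of the paper's performance/complexity ratio.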

304 citations


Proceedings ArticleDOI
24 Oct 1984
TL;DR: A deterministic polynomial-time algorithm transforms pairs (g, r), where g is any one-way function and r is a random k-bit string, into polynomial-time computable functions f_r: {1, ..., 2^k} → {1, ..., 2^k} that no probabilistic polynomial-time algorithm can distinguish from random functions.
Abstract: This paper develops a constructive theory of randomness for functions based on computational complexity. We present a deterministic polynomial-time algorithm that transforms pairs (g, r), where g is any one-way (in a very weak sense) function and r is a random k-bit string, to polynomial-time computable functions f_r: {1, ..., 2^k} → {1, ..., 2^k}. These f_r's cannot be distinguished from random functions by any probabilistic polynomial-time algorithm that asks and receives the value of a function at arguments of its choice. The result has applications in cryptography, random constructions and complexity theory.
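The construction (now known as the Goldreich–Goldwasser–Micali tree) expands a seed with a length-doubling generator, taking the left or right half according to each input bit. A minimal sketch, with SHA-256 as a stand-in for the generator (purely illustrative; no one-wayness claim is intended):

```python
import hashlib

def G(seed: bytes):
    # stand-in length-doubling generator: a 32-byte seed expands to two
    # 32-byte halves (SHA-256 here is only an illustration)
    left = hashlib.sha256(b'0' + seed).digest()
    right = hashlib.sha256(b'1' + seed).digest()
    return left, right

def f_r(seed: bytes, x: int, k: int) -> bytes:
    # GGM-style function: walk the binary tree along the k input bits of x,
    # most significant bit first
    s = seed
    for i in reversed(range(k)):
        bit = (x >> i) & 1
        s = G(s)[bit]
    return s
```

Each evaluation costs k generator calls, so the function on a 2^k-element domain is computable in time polynomial in k, exactly the regime the abstract describes.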

79 citations


Journal ArticleDOI
TL;DR: A new algorithm is introduced for the parallel/variable precision case that is based on Newton's method, and has complexity asymptotically equivalent to one scalar multiplication, independent of n.
Abstract: The computational complexity of solving an $n \times n$ system of linear equations depends on whether the computational model is (a) sequential or parallel, and (b) fixed precision or variable precision. We survey known complexity results for each of the four cases, and introduce a new algorithm for the parallel/variable precision case that is based on Newton’s method. If $n^3$ processors are available, this algorithm has complexity asymptotically equivalent to one scalar multiplication, independent of n. If only $n^2$ processors are available, the complexity is proportional to but still competitive with known alternatives.
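The Newton iteration for a matrix inverse is X_{k+1} = X_k(2I − AX_k), whose residual I − AX squares at every step; the paper's observation is that each such step parallelizes to a constant number of parallel multiplications. A sequential sketch (not the paper's algorithm verbatim; the starting guess X_0 = A^T/(‖A‖₁‖A‖_∞) is one standard choice that guarantees convergence):

```python
def matmul(A, B):
    # naive square-matrix product (lists of lists)
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def newton_inverse(A, iters=30):
    n = len(A)
    # X0 = A^T / (||A||_1 * ||A||_inf) ensures the initial residual has norm < 1
    norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    norminf = max(sum(abs(A[i][j]) for j in range(n)) for i in range(n))
    X = [[A[j][i] / (norm1 * norminf) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        AX = matmul(A, X)
        # R = 2I - A X
        R = [[(2 if i == j else 0) - AX[i][j] for j in range(n)]
             for i in range(n)]
        X = matmul(X, R)                 # X <- X (2I - A X)
    return X
```

Because the residual squares each step, the error falls doubly exponentially in the iteration count, which is why only O(log) iterations are needed; with n³ processors each iteration's matrix products run in logarithmic parallel time.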

50 citations


Book ChapterDOI
11 Apr 1984
TL;DR: Space constraints in p.d.a.'s are studied; the pushdown complexity of a language is defined as the space needed by a p.d.a. to accept that language.

Abstract: We study space constraints in p.d.a.'s. We call the pushdown complexity of a language the space needed by a p.d.a. to accept that language.

9 citations


Proceedings Article
05 Sep 1984

8 citations


Dissertation
01 Jan 1984

7 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that if a 0-bounded function can be defined using 2-bounded primitive recursion, then it can also be defined using simultaneous 0-bounded recursions.
Abstract: Evidently F is 1-bounded (respectively 2-bounded) if F(x1, ..., xm) is bounded for all x1, ..., xm by a linear (respectively a polynomial) function of Max{x1, ..., xm}. We shall prove that if a 0-bounded function can be defined using 2-bounded primitive recursion it can also be defined using simultaneous 0-bounded recursions. By a class 𝒞 of number-theoretic functions we shall always mean a collection which contains the successor function and the case function C (C(x, y, u, v) = x if u = v, C(x, y, u, v) = y if u ≠ v) and which is closed under explicit definition. 𝒞* denotes the set of relations whose characteristic functions are in 𝒞. In [2] Grzegorczyk introduced the hierarchy ℰⁿ of primitive recursive functions. For n = 0, 1, 2, ℰⁿ is, roughly speaking, the smallest class closed under n-bounded recursion. This statement becomes exact if we enlarge ℰ⁰ to the class ℰ⁰′ by adding Max{x1, x2} as an initial function. It is an idiosyncrasy of Grzegorczyk's definition that Max ∉ ℰ⁰; thus ℰ⁰ is not a class. But ℰ⁰ and ℰ⁰′ contain the same relations.

7 citations


Book ChapterDOI
13 Dec 1984
TL;DR: An algorithm is proposed for drawing a random sample of size M from a population of size N (M ≤ N); its time complexity is MIN{O(M log₂M), O[(N−M) log₂(N−M)]} and its space complexity is O(M).

Abstract: An algorithm for drawing a random sample of size M from a population of size N (M ≤ N) is proposed. The algorithm has time complexity MIN{O(M log₂M), O[(N−M) log₂(N−M)]} and space complexity O(M).
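The MIN{...} bound suggests generating whichever of the sample or its complement is smaller and, if necessary, taking the complement at the end. A sketch of that idea (this does not reproduce the paper's data structure, and the final complement step here is O(N) rather than the paper's bound):

```python
import random

def sample(N, M, rng=random):
    # draw M distinct values from {0, ..., N-1}; work with whichever of
    # the sample or its complement is the smaller set
    k = min(M, N - M)
    chosen = set()
    while len(chosen) < k:
        chosen.add(rng.randrange(N))     # rejection of duplicates via the set
    if k == M:
        return sorted(chosen)
    return sorted(set(range(N)) - chosen)  # complement when N - M < M
```

When M is close to N (say M = N − 5), only the 5 excluded elements are generated randomly, which is where the (N−M) term in the complexity comes from.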

5 citations


Book ChapterDOI
01 Oct 1984
TL;DR: This chapter discusses the analysis of parallel algorithms, especially their complexity, measured by the time in which they can be implemented on a k-processor computer.

Abstract: In this chapter we discuss the analysis of parallel algorithms, especially their complexity. The complexity of serial algorithms is usually measured by the number of arithmetic operations, but the complexity of parallel algorithms is measured by the time in which they can be implemented on a k-processor computer.
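As a toy instance of this measure, consider summing n numbers on a k-processor machine under the assumed cost model of one addition per processor per time step (an assumption for illustration, not from the chapter): each processor sums its share, then the k partial sums are combined in a binary tree.

```python
import math

def parallel_sum_time(n, k):
    # parallel time to sum n numbers on k processors:
    # local phase: each processor adds up ceil(n/k) values sequentially,
    # combine phase: k partial sums merge in a binary tree of additions
    k = min(k, n)
    local = math.ceil(n / k) - 1
    combine = math.ceil(math.log2(k)) if k > 1 else 0
    return local + combine
```

With k = 1 this reduces to the serial operation count n − 1, while with k = n processors the time drops to ⌈log₂ n⌉, illustrating how the same algorithm gets two different complexities under the two measures.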

3 citations


Journal ArticleDOI
TL;DR: This note offers a different style of diagram, shows how to derive Tamine's complexity formulas from it, and applies the new method to his examples.

Abstract: This is a note on J. J. Tamine's article about determining the complexity of algorithms using trees [TAMI 83] in the September SIGPLAN Notices. I enjoyed reading Tamine's article and thought his concept was worth developing. However, I had trouble understanding how to derive the formulas from the diagrams. I found that evaluating the trees is easier if the diagrams clearly show the distinction between the different types of nodes. In this note, I offer a different style of diagram and show how to derive Tamine's complexity formulas from it. Then I apply this new method to his examples. 1.0 Distinguishing Node Types. In structured programming languages, part of the language describes actions to perform and part of the language describes "structural" relations between actions. Parts such as "a := b;" and "incr (i);" are process statements, while parts such as "if ... then ... fi" are control structures. We will differentiate between the process nodes and the structure nodes that represent these. Since the complexity also depends on the type of structure node, we will differentiate between three types of structure nodes: alternatives, repetitions, and lists. Another important distinction is that a control structure is at a higher level than its parts. For example, in the if statement: if a = b then do (a, b, c) else do (x, y, z) fi
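One way to make the node-type distinction concrete is to evaluate such a tree recursively, with one rule per node type. The node kinds and cost rules below are our reading of the note's idea (sum for lists, worst branch for alternatives, count times body for repetitions), not Tamine's exact formulas:

```python
def complexity(node):
    # node is one of:
    #   ('process', cost)          -- a process statement with a fixed cost
    #   ('list', children)         -- a sequence: costs add
    #   ('alt', children)          -- if/else: take the worst branch
    #   ('repeat', count, child)   -- a loop: iterations times body cost
    kind = node[0]
    if kind == 'process':
        return node[1]
    if kind == 'list':
        return sum(complexity(c) for c in node[1])
    if kind == 'alt':
        return max(complexity(c) for c in node[1])
    if kind == 'repeat':
        return node[1] * complexity(node[2])
    raise ValueError(kind)
```

For the if statement above, the tree is an alternative node over two process nodes, and its cost is the maximum of the two branches.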

2 citations


Book ChapterDOI
TL;DR: Some remarks are given concerning the complexity of an exchange algorithm for Chebyshev approximation.

Abstract: Some remarks are given concerning the complexity of an exchange algorithm for Chebyshev approximation. We consider an "exchange" algorithm that constructs the best polynomial of uniform approximation to a continuous function defined on a closed interval or a finite point set of real numbers. The first, and still popular, class of methods for this problem has been called "exchange algorithms". We shall consider the simplest method of this class, a blood relative of the dual simplex method of linear programming and a special case of the cutting-plane method. The idea of the method was initiated by Remes [1], [2]; see also Cheney [3] for further developments. Klee and Minty [4] (1972) showed by example that the number of steps in a simplex method can be exponential in the dimension of the problem. Since then considerable effort has been expended trying to explain the efficiency experienced in practice. Recently, probabilistic models have been assumed that yield expected values for the number of steps with low-order monomial behaviour; see for example Borgwardt [5] and Smale [6]. Alternatively, one might ask whether one can somehow separate the good problems from the bad ones. We believe that this may be possible for the exchange algorithm.
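The flavor of such a method is visible already in the simplest case, degree-0 (constant) approximation on a finite point set: the reference set holds two points, the best constant on the reference is the midpoint of their values, and each step exchanges in the point of worst error. A sketch under those assumptions (a didactic special case, not the algorithm analyzed in the paper):

```python
def exchange_constant(xs, f):
    # best uniform constant approximation to f on the finite set xs,
    # found by the simplest exchange (degree-0 Remes) iteration
    vals = {x: f(x) for x in xs}
    ref = [xs[0], xs[1]]                  # initial two-point reference set
    while True:
        c = (vals[ref[0]] + vals[ref[1]]) / 2   # best constant on the reference
        # point of maximal deviation from the current constant
        worst = max(xs, key=lambda x: abs(vals[x] - c))
        if abs(vals[worst] - c) <= abs(vals[ref[0]] - c) + 1e-12:
            return c                       # no point beats the reference error
        # exchange: replace the reference point whose value lies on the
        # same side of c as the new worst point
        if (vals[worst] - c) * (vals[ref[0]] - c) > 0:
            ref[0] = worst
        else:
            ref[1] = worst
```

The reference error increases strictly at every exchange, so on a finite point set the iteration must terminate; the complexity question the paper raises is how many such exchanges are needed.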

01 Jan 1984
TL;DR: A parallel algorithm is presented which merges two sorted lists represented as 2–3 trees of length m and n, respectively, with at most 2m processors within O(log n) time.
Abstract: A parallel algorithm is presented which merges two sorted lists represented as 2–3 trees of length m and n (m≦ n), respectively, with at most 2m processors within O(log n) time. The consideration for the time complexity includes comparisons, allocation of processors, and construction of an output 2–3 tree. The algorithm is performed without read conflicts.
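The sequential essence of such a merge is cross-ranking: each element's final position is its index in its own list plus its rank in the other list, and in the parallel version each of those ranks is computed by an independent processor in O(log n) time. A sketch that mimics that per-element work with a loop (omitting the 2–3 tree construction, which the paper handles in parallel as well):

```python
import bisect

def merge_by_ranking(a, b):
    # merge sorted lists a and b by placing each element at its final rank:
    # position of a[i] is i + (# elements of b strictly before it), and
    # symmetrically for b; bisect_left vs bisect_right breaks ties so the
    # two lists never claim the same slot
    out = [None] * (len(a) + len(b))
    for i, x in enumerate(a):
        out[i + bisect.bisect_left(b, x)] = x
    for j, y in enumerate(b):
        out[j + bisect.bisect_right(a, y)] = y
    return out
```

Since every rank computation is an independent binary search, assigning one processor per element of the shorter list gives the O(log n) parallel time with no read conflicts between the writes, matching the abstract's bound in spirit.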