
Showing papers on "Approximation algorithm published in 1971"


Book
01 Jan 1971
TL;DR: This book treats best uniform approximation, the interpolation formula and Gaussian quadrature, applications of the Hahn-Banach theorem to dual extremal problems, and approximation theory in homogeneous Banach spaces.
Abstract: Best uniform approximation.- The interpolation formula and Gaussian quadrature.- Best approximation and extremal problems in other norms.- Applications of the Hahn-Banach theorem and dual extremal problems.- Approximation theory and extremal problems in Hilbert spaces.- Minimal extrapolation of Fourier transforms.- General aspects of "Degree of approximation".- Approximation theory in homogeneous Banach spaces.

224 citations


Proceedings ArticleDOI
13 Oct 1971
TL;DR: It is shown that, provided the degree of the polynomial to be evaluated exceeds k⌈log₂k⌉, one of the algorithms given is within one time unit of optimality.
Abstract: Algorithms for the evaluation of polynomials on a hypothetical computer with k independent arithmetic processors are presented. It is shown that, provided the degree of the polynomial to be evaluated exceeds k⌈log₂k⌉, one of the algorithms given is within one time unit of optimality.

38 citations
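The abstract does not reproduce the k-processor algorithm itself, so the following is only a minimal sequential sketch of the underlying idea: split the coefficient vector into k blocks, evaluate each block by Horner's rule (the step that independent processors could perform concurrently), and recombine the partial results with powers of x. The function names and blocking scheme are illustrative assumptions, not the paper's construction.

```python
def horner(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) by Horner's rule."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def eval_blocked(coeffs, x, k):
    """Split the coefficients into k blocks; each block could be handled
    by an independent processor, and the partial results are combined
    with the appropriate power of x."""
    n = len(coeffs)
    block = -(-n // k)  # ceil(n / k) coefficients per block
    return sum(x ** i * horner(coeffs[i:i + block], x)
               for i in range(0, n, block))

# 1 + 2x + 3x^2 + 4x^3 + 5x^4 at x = 2 evaluates to 129 either way.
p = [1.0, 2.0, 3.0, 4.0, 5.0]
```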


Journal ArticleDOI
TL;DR: The results may be used to test for convergence in computer-aided network optimization, to test for optimality in the Chebyshev sense of any given design, and to gain insight which may be helpful in developing minimax approximation algorithms.
Abstract: This paper derives and discusses necessary conditions for an optimum in nonlinear minimax approximation problems. A straightforward geometrical interpretation is presented. The results may be used to test for convergence in computer-aided network optimization, to test for optimality in the Chebyshev sense of any given design, and to gain insight which may be helpful in developing minimax approximation algorithms.

27 citations


Journal ArticleDOI
01 Oct 1971
TL;DR: It has been proved by two different methods that the algorithm converges to the sought value in the mean-square sense and with probability one.
Abstract: A new algorithm for stochastic approximation has been proposed, along with the assumptions and conditions necessary for convergence. It has been proved by two different methods that the algorithm converges to the sought value in the mean-square sense and with probability one. The rate of convergence of the new algorithm is shown to be better than two existing algorithms under certain conditions. Results of simulation have been given, making a realistic comparison between the three algorithms.

20 citations
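The abstract does not state the new algorithm, so the following is only a sketch of the classical Robbins-Monro scheme that such proposals build on, with the standard 1/n gain; the target function, gain constant, and noise level are illustrative assumptions.

```python
import random

def robbins_monro(noisy_f, x0, steps=5000, a=1.0):
    """Find the root of a regression function f from noisy observations.

    Classical scheme: x_{n+1} = x_n - (a / n) * Y_n, where Y_n is a noisy
    measurement of f(x_n).  Under standard conditions the iterates
    converge to the root in mean square and with probability one.
    """
    x = x0
    for n in range(1, steps + 1):
        x -= (a / n) * noisy_f(x)
    return x

random.seed(0)
# f(x) = 2*(x - 3) has its root at x = 3; observations carry zero-mean noise.
root = robbins_monro(lambda x: 2 * (x - 3) + random.gauss(0, 0.5), x0=0.0)
```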


Journal ArticleDOI
01 Apr 1971
TL;DR: The multidimensional Kiefer-Wolfowitz stochastic approximation algorithm is proposed for the performance-adaptive self-organizing control of a class of distributed systems.
Abstract: The multidimensional Kiefer-Wolfowitz stochastic approximation algorithm is proposed for the performance-adaptive self-organizing control of a class of distributed systems. The class of systems considered is that which can be modeled mathematically by the general linear second-order elliptic equation with unknown coefficients and is defined on a fixed and bounded domain Ω. Noisy measurements of the response are assumed available at a finite number of fixed points in Ω. The minimum number of measurement points necessary for recovery of the significant part of the response is determined by application of an n-dimensional extension of the sampling theorem.

7 citations
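As a minimal one-dimensional sketch of the Kiefer-Wolfowitz iteration the paper applies in its multidimensional form: minimize a regression function from noisy evaluations only, using a shrinking finite-difference gradient estimate. The gain sequences and test loss below are textbook choices, not the paper's setup.

```python
import random

def kiefer_wolfowitz(noisy_loss, x0, steps=4000):
    """One-dimensional Kiefer-Wolfowitz stochastic approximation.

    Update  x_{n+1} = x_n - a_n * (Y(x_n + c_n) - Y(x_n - c_n)) / (2 c_n)
    with classical gains a_n = 1/n and c_n = n**(-1/3), where Y is a
    noisy evaluation of the loss.
    """
    x = x0
    for n in range(1, steps + 1):
        a_n = 1.0 / n
        c_n = n ** (-1.0 / 3.0)
        grad = (noisy_loss(x + c_n) - noisy_loss(x - c_n)) / (2 * c_n)
        x -= a_n * grad
    return x

random.seed(1)
# Loss (x - 2)^2 has its minimum at x = 2; evaluations carry zero-mean noise.
xmin = kiefer_wolfowitz(lambda x: (x - 2.0) ** 2 + random.gauss(0, 0.1), x0=0.0)
```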


Journal ArticleDOI
01 Jul 1971
TL;DR: The Robbins-Monro algorithm for finding the root of a linear regression function suggests the use of fixed step size stochastic approximation algorithms to solve more general quasi-stationary estimation problems.
Abstract: Linear estimation under a minimum mean-square-error criterion in a quasi-stationary environment is considered. A generalized form of the Widrow-Hoff algorithm is employed for the estimation. Performance is measured by the excess error over the minimum meansquare error. A Gaussian assumption is used to determine this performance and determine simple bounds. The transient solution for the algorithm is investigated and a convergence rate determined. These results are used to optimize the algorithm parameters and bound the performance as a function of the environmental rate of change. The Robbins-Monro algorithm for finding the root of a linear regression function suggests the use of fixed step size stochastic approximation algorithms to solve more general quasi-stationary estimation problems.

7 citations
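The generalized form used in the paper is not given in the abstract, so the following sketches only the basic fixed-step Widrow-Hoff (LMS) update it builds on; the step size, dimension, and data model are illustrative assumptions. A fixed step size trades steady-state excess error for the ability to track a quasi-stationary environment, which is the trade-off the paper analyzes.

```python
import random

def lms(samples, mu, dim):
    """Fixed-step-size Widrow-Hoff (LMS) estimation.

    For each pair (u, d) of input vector and desired output, update the
    weight vector w by  w <- w + mu * (d - w.u) * u.
    """
    w = [0.0] * dim
    for u, d in samples:
        err = d - sum(wi * ui for wi, ui in zip(w, u))
        w = [wi + mu * err * ui for wi, ui in zip(w, u)]
    return w

random.seed(2)
true_w = [1.0, -0.5]
data = []
for _ in range(3000):
    u = [random.gauss(0, 1), random.gauss(0, 1)]
    d = sum(ti * ui for ti, ui in zip(true_w, u)) + random.gauss(0, 0.05)
    data.append((u, d))
w_hat = lms(data, mu=0.05, dim=2)  # should approach true_w
```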


Journal ArticleDOI
TL;DR: A general stochastic approximation algorithm is given along with the assumptions and conditions necessary to show that it converges; convergence is proven in the mean-square sense.
Abstract: A general stochastic approximation algorithm is given along with assumptions and conditions necessary to show that it converges. Convergence is proven in the mean-square sense. The rate of convergence is shown to be better than two algorithms proposed previously.

1 citation


Book ChapterDOI
01 Jan 1971
TL;DR: Two types of algorithms developed to generate recursive estimates that are linear or nonlinear functions of the past measurements are discussed in terms of their computational structure as well as their statistical properties, such as mean-square error.
Abstract: Frequently in the design of on-line learning systems for pattern recognition or system identification, there is a need to construct successive estimates for the unknown parameters of some underlying probability distribution. One of the most widely used methods for this purpose is stochastic approximation. It is well known that stochastic approximation is concerned with successive estimation algorithms which converge to the true value of some sought (unknown) parameter when, due to the random nature of the system environment, the measurements are inevitably noisy. The algorithms of most interest to on-line pattern recognition or system identification are those which have the following properties: (1) they are self-correcting, that is, the error of the estimates tends to vanish in the limit, and (2) their convergence to the true value of an unknown parameter is of some specific nature, for example, in mean square or with probability one. This paper will discuss two types of algorithms which have been developed to generate recursive estimates that are linear or nonlinear functions of the past measurements. These algorithms will be discussed in terms of their computational structure as well as their statistical properties such as mean-square error. In particular, comparison will be made between linear and nonlinear algorithms on the basis of specific assumptions about the unknown inputs and parameters characterizing the learning environment.
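The simplest recursive estimate of the linear kind the chapter describes is the running mean, sketched below; the example is a generic illustration, not one of the chapter's specific algorithms. Each new measurement corrects the estimate, and the 1/n correction gain vanishes in the limit, which is the self-correcting property the abstract names.

```python
def recursive_mean(measurements):
    """Linear recursive estimate of an unknown constant from noisy data.

    Running-mean update:  m_n = m_{n-1} + (1/n) * (y_n - m_{n-1}).
    Storing only the current estimate and the count makes it suitable
    for on-line use.
    """
    m = 0.0
    for n, y in enumerate(measurements, start=1):
        m += (y - m) / n
    return m
```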

Proceedings ArticleDOI
13 Oct 1971
TL;DR: A specific approach to the extension problem is formulated for demand paging systems, and the class of replacement rules that solve it is shown to be precisely the class of rules having the well-known inclusion property.
Abstract: Given a particular computer system the extension problem concerns the prediction of performance when the size of main memory is increased. In this paper a specific approach to this problem is formulated for demand paging systems. A necessary and sufficient condition on the nature of page replacement rules which leads to solutions of the extension problem is a major result of the paper. As the other principal result we show that the class of replacement rules so defined is precisely that class of rules having the well-known inclusion property. The paper concludes with a general discussion of topics related to the extension problem.
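The inclusion property can be checked directly for LRU, a standard example of a replacement rule that has it: the pages resident in a memory of size m are always a subset of those resident in a memory of size m + 1, for the same reference string. The simulation below is a generic illustration of that property, not the paper's formulation; the reference string is made up.

```python
def lru_contents(reference_string, capacity):
    """Simulate LRU paging and return the final set of resident pages."""
    stack = []  # most recently used page at the front
    for page in reference_string:
        if page in stack:
            stack.remove(page)
        stack.insert(0, page)
        del stack[capacity:]  # evict least-recently-used overflow
    return set(stack)

# Inclusion property: memory of size m holds a subset of memory of size m+1.
refs = [1, 2, 3, 1, 4, 2, 5, 1, 2, 3]
smaller = lru_contents(refs, 3)
larger = lru_contents(refs, 4)
```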