Topic

Minimax approximation algorithm

About: Minimax approximation algorithm is a research topic. Over its lifetime, 3,231 publications have been published within this topic, receiving 76,402 citations.


Papers
Journal ArticleDOI
TL;DR: In this article, a method for making successive experiments at levels x1, x2, ··· in such a way that xn will tend to θ in probability is presented.
Abstract: Let M(x) denote the expected value at level x of the response to a certain experiment. M(x) is assumed to be a monotone function of x but is unknown to the experimenter, and it is desired to find the solution x = θ of the equation M(x) = α, where α is a given constant. We give a method for making successive experiments at levels x1, x2, ··· in such a way that xn will tend to θ in probability.

9,312 citations
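
A minimal sketch of the iteration described in the abstract above, written in Python. The example response function, the target level alpha, and the step sizes a_n = c/n are assumptions chosen here for illustration, not taken from the paper.

```python
import random

def robbins_monro(noisy_response, alpha, x0=0.0, n_steps=5000, c=1.0):
    """Robbins-Monro stochastic approximation: seek theta with M(theta) = alpha
    when only noisy observations y_n of M(x_n) are available.
    Step sizes a_n = c/n satisfy sum a_n = inf and sum a_n^2 < inf."""
    x = x0
    for n in range(1, n_steps + 1):
        y = noisy_response(x)            # noisy observation of M(x_n)
        x = x - (c / n) * (y - alpha)    # move x_n toward the root of M(x) = alpha
    return x

# Hypothetical example: M(x) = 2x + 1 observed with Gaussian noise; M(theta) = 5 at theta = 2.
noisy = lambda x: 2.0 * x + 1.0 + random.gauss(0.0, 0.5)
print(robbins_monro(noisy, alpha=5.0))
```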

Journal ArticleDOI
TL;DR: It is shown that standard multilayer feedforward networks with as few as a single hidden layer and arbitrary bounded and nonconstant activation function are universal approximators with respect to L^p(μ) performance criteria, for arbitrary finite input environment measures μ.

5,593 citations
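
As an informal numerical illustration of that statement (a sketch under assumptions, not the paper's construction): a single hidden layer of sigmoid units with randomly drawn input weights, where only the output weights are fitted by least squares, already approximates a smooth one-dimensional target well. The target function, unit count, and weight distribution below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
target = lambda x: np.sin(3.0 * x)          # hypothetical target on [-1, 1]
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# One hidden layer with random (fixed) input weights and biases.
n_hidden = 50
w = rng.normal(0.0, 5.0, n_hidden)
b = rng.normal(0.0, 5.0, n_hidden)

x = np.linspace(-1.0, 1.0, 200)
H = sigmoid(np.outer(x, w) + b)             # hidden activations, shape (200, n_hidden)

# Fit only the output weights by linear least squares.
c, *_ = np.linalg.lstsq(H, target(x), rcond=None)
print("max abs error:", np.max(np.abs(H @ c - target(x))))
```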

Journal ArticleDOI
TL;DR: The paper presents an SA algorithm that is based on a simultaneous perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures; the algorithm can be significantly more efficient than the standard algorithms in large-dimensional problems.
Abstract: The problem of finding a root of the multivariate gradient equation that arises in function minimization is considered. When only noisy measurements of the function are available, a stochastic approximation (SA) algorithm of the general Kiefer-Wolfowitz type is appropriate for estimating the root. The paper presents an SA algorithm that is based on a simultaneous perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures. Theory and numerical experience indicate that the algorithm can be significantly more efficient than the standard algorithms in large-dimensional problems.

2,149 citations
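
A minimal sketch of a simultaneous perturbation gradient estimate in the spirit of this abstract. The gain sequences, perturbation distribution, and the noisy quadratic test function are illustrative assumptions, not the paper's recommended settings.

```python
import numpy as np

def spsa_minimize(loss, x0, n_iter=1000, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    """SPSA: estimate the entire gradient from only two noisy loss evaluations
    per iteration, using a random +/-1 simultaneous perturbation, instead of
    the many evaluations a finite-difference (Kiefer-Wolfowitz) scheme needs."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        a_k = a / k ** alpha                   # step-size gain
        c_k = c / k ** gamma                   # perturbation gain
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        g_hat = (loss(x + c_k * delta) - loss(x - c_k * delta)) / (2.0 * c_k * delta)
        x = x - a_k * g_hat
    return x

# Hypothetical example: noisy quadratic in 10 dimensions, minimized at the origin.
rng = np.random.default_rng(1)
noisy_quadratic = lambda x: float(np.sum(x ** 2)) + rng.normal(0.0, 0.01)
print(spsa_minimize(noisy_quadratic, x0=np.ones(10)))
```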

Book
01 Jan 1966
TL;DR: In this book, Tchebycheff (minimax) approximation by polynomials and other linear families is developed alongside the Tchebycheff solution of inconsistent linear equations, least-squares approximation, and rational approximation, with characterization theorems and algorithms for computing best approximations.
Abstract (table of contents):
Introduction: 1. Examples and prospectus 2. Metric spaces 3. Normed linear spaces 4. Inner-product spaces 5. Convexity 6. Existence and unicity of best approximations 7. Convex functions.
The Tchebycheff Solution of Inconsistent Linear Equations: 1. Introduction 2. Systems of equations with one unknown 3. Characterization of the solution 4. The special case 5. Polya's algorithm 6. The ascent algorithm 7. The descent algorithm 8. Convex programming.
Tchebycheff Approximation by Polynomials and Other Linear Families: 1. Introduction 2. Interpolation 3. The Weierstrass theorem 4. General linear families 5. The unicity problem 6. Discretization errors: general theory 7. Discretization: algebraic polynomials; the inequalities of Markoff and Bernstein 8. Algorithms.
Least-Squares Approximation and Related Topics: 1. Introduction 2. Orthogonal systems of polynomials 3. Convergence of orthogonal expansions 4. Approximation by series of Tchebycheff polynomials 5. Discrete least-squares approximation 6. The Jackson theorems.
Rational Approximation: 1. Introduction 2. Existence of best rational approximations 3. The characterization of best approximations 4. Unicity; continuity of best-approximation operators 5. Algorithms 6. Padé approximation and its generalizations 7. Continued fractions.
Some Additional Topics: 1. The Stone approximation theorem 2. The Muntz theorem 3. The converses of the Jackson theorems 4. Polygonal approximation and bases in C[a, b] 5. The Kharshiladze-Lozinski theorems 6. Approximation in the mean.
Notes. References. Index.

1,854 citations
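
To tie the book's "Tchebycheff solution of inconsistent linear equations" to the topic itself, here is a small sketch of discrete minimax (Tchebycheff) polynomial approximation posed as a linear program: minimize the largest absolute residual over the sample points. The use of scipy.optimize.linprog and the exp(x) example are assumptions for illustration; the book's own algorithms (Polya's algorithm, the ascent and descent algorithms) are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_poly_fit(x, y, degree):
    """Discrete minimax (Tchebycheff) polynomial fit: choose coefficients
    minimizing t subject to |p(x_i) - y_i| <= t at every sample point."""
    V = np.vander(x, degree + 1)                  # p(x_i) = V @ coeffs
    n, m = V.shape
    c = np.zeros(m + 1)
    c[-1] = 1.0                                   # objective: minimize t (last variable)
    A_ub = np.block([[ V, -np.ones((n, 1))],      #  (p(x_i) - y_i) <= t
                     [-V, -np.ones((n, 1))]])     # -(p(x_i) - y_i) <= t
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (m + 1))
    return res.x[:m], res.x[-1]                   # coefficients (highest degree first), minimax error

# Hypothetical example: degree-3 minimax fit to exp(x) on 50 points in [-1, 1].
x = np.linspace(-1.0, 1.0, 50)
coeffs, err = minimax_poly_fit(x, np.exp(x), degree=3)
print("minimax error:", err)
```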

Journal ArticleDOI
TL;DR: Theorem 2.7 as discussed by the authors generalizes a result of Gao and Xu [4] concerning the approximation of functions of bounded variation by linear combinations of a fixed sigmoidal function.
Abstract: We generalize a result of Gao and Xu [4] concerning the approximation of functions of bounded variation by linear combinations of a fixed sigmoidal function to the class of functions of bounded φ-variation (Theorem 2.7). Also, in the case of one variable, [1: Proposition 1] is improved. Our proofs are similar to that of [4].

1,316 citations

Network Information
Related Topics (5)
Topic                           Papers     Citations   Relatedness
Bounded function                77.2K      1.3M        85% related
Differential equation           88K        2M          83% related
Eigenvalues and eigenvectors    51.7K      1.1M        82% related
Markov chain                    51.9K      1.3M        82% related
Matrix (mathematics)            105.5K     1.9M        82% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    7
2022    14
2021    28
2020    34
2019    42
2018    49