
Topic

Quadratic function

About: Quadratic function is a research topic. Over its lifetime, 5,724 publications have been published within this topic, receiving 120,366 citations. The topic is also known as: 2nd degree polynomial and polynomial of degree 2.
Papers

Journal Article
Abstract: We propose a new trust region approach for minimizing nonlinear functions subject to simple bounds. By choosing an appropriate quadratic model and scaling matrix at each iteration, we show that it is not necessary to solve a quadratic programming subproblem, with linear inequalities, to obtain an improved step using the trust region idea. Instead, a solution to a trust region subproblem is defined by minimizing a quadratic function subject only to an ellipsoidal constraint. The iterates generated by these methods are always strictly feasible. Our proposed methods reduce to a standard trust region approach for the unconstrained problem when there are no upper or lower bounds on the variables. Global and quadratic convergence of the methods is established; preliminary numerical experiments are reported.

2,780 citations
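
The subproblem above, minimizing a quadratic model subject only to an ellipsoidal constraint, can be made concrete with a short sketch. The snippet below is an illustration under assumed names (g, B, D, delta) and uses a standard dogleg step after rescaling, which is one common way to solve such subproblems approximately and is not necessarily the authors' method.

```python
# Illustrative sketch (not the paper's code): one approximate trust-region step
# for the quadratic model m(p) = g.T @ p + 0.5 * p.T @ B @ p subject to the
# ellipsoidal constraint ||D p|| <= delta, using a dogleg step after the
# change of variables q = D p.  Assumes B is positive definite.
import numpy as np

def ellipsoidal_dogleg_step(g, B, D, delta):
    Dinv = np.linalg.inv(D)
    g_s = Dinv @ g                     # gradient in scaled variables
    B_s = Dinv @ B @ Dinv              # Hessian in scaled variables

    p_newton = -np.linalg.solve(B_s, g_s)          # unconstrained minimizer
    if np.linalg.norm(p_newton) <= delta:
        return Dinv @ p_newton                     # Newton step already feasible

    # Cauchy (steepest-descent) minimizer along -g_s
    p_cauchy = -(g_s @ g_s) / (g_s @ B_s @ g_s) * g_s
    if np.linalg.norm(p_cauchy) >= delta:
        return Dinv @ (-delta * g_s / np.linalg.norm(g_s))

    # Otherwise move from the Cauchy point toward the Newton point until the
    # scaled step reaches the trust-region boundary.
    d = p_newton - p_cauchy
    a, b, c = d @ d, 2 * p_cauchy @ d, p_cauchy @ p_cauchy - delta**2
    tau = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
    return Dinv @ (p_cauchy + tau * d)

# Toy usage on a 2-D strictly convex quadratic with a diagonal scaling matrix
B = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, -2.0])
D = np.diag([1.0, 2.0])
print(ellipsoidal_dogleg_step(g, B, D, delta=0.5))
```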


Book Chapter
W. James, Charles Stein
01 Jan 1992
Abstract: It has long been customary to measure the adequacy of an estimator by the smallness of its mean squared error. The least squares estimators were studied by Gauss and by other authors later in the nineteenth century. A proof that the best unbiased estimator of a linear function of the means of a set of observed random variables is the least squares estimator was given by Markov [12], a modified version of whose proof is given by David and Neyman [4]. A slightly more general theorem is given by Aitken [1]. Fisher [5] indicated that for large samples the maximum likelihood estimator approximately minimizes the mean squared error when compared with other reasonable estimators. This paper will be concerned with optimum properties or failure of optimum properties of the natural estimator in certain special problems with the risk usually measured by the mean squared error or, in the case of several parameters, by a quadratic function of the estimators. We shall first mention some recent papers on this subject and then give some results, mostly unpublished, in greater detail.

2,545 citations
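
A rough illustration of risk measured by a quadratic function of the estimators: the sketch below compares, by Monte Carlo, the mean squared error of the natural estimator with that of James-Stein shrinkage for a multivariate normal mean. The dimension, number of trials, and true mean are assumptions chosen for the example, not values from the paper.

```python
# Monte-Carlo comparison of quadratic risk E||estimate - theta||^2 for the
# natural estimator (the observation itself) versus James-Stein shrinkage,
# with X ~ N(theta, I_p) and p >= 3.  All numbers below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
p, trials = 10, 100_000
theta = np.zeros(p)                              # assumed true mean

X = rng.normal(theta, 1.0, size=(trials, p))
mle = X                                          # natural estimator
shrink = 1.0 - (p - 2) / np.sum(X**2, axis=1)    # James-Stein shrinkage factor
js = shrink[:, None] * X

risk = lambda est: np.mean(np.sum((est - theta)**2, axis=1))
print(f"natural estimator risk ~ {risk(mle):.2f}, James-Stein risk ~ {risk(js):.2f}")
```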


Journal Article
C. G. Broyden
Abstract: This paper presents a more detailed analysis of a class of minimization algorithms, which includes as a special case the DFP (Davidon-Fletcher-Powell) method, than has previously appeared. Only quadratic functions are considered, but particular attention is paid to the magnitude of successive errors and their dependence upon the initial matrix. On the basis of this, a possible explanation of some of the observed characteristics of the class is tentatively suggested. Probably the best-known algorithm for determining the unconstrained minimum of a function of many variables, where explicit expressions are available for the first partial derivatives, is that of Davidon (1959) as modified by Fletcher & Powell (1963). This algorithm has many virtues. It is simple and does not require at any stage the solution of linear equations. It minimizes a quadratic function exactly in a finite number of steps, and this property makes convergence of this algorithm rapid, when applied to more general functions, in the neighbourhood of the solution. It is, at least in theory, stable, since the iteration matrix H_j, which transforms the jth gradient into the jth step direction, may be shown to be positive definite. In practice the algorithm has been generally successful, but it has exhibited some puzzling behaviour. Broyden (1967) noted that H_j does not always remain positive definite, and attributed this to rounding errors. Pearson (1968) found that for some problems the solution was obtained more efficiently if H_j was reset to a positive definite matrix, often the unit matrix, at intervals during the computation. Bard (1968) noted that H_j could become singular, attributed this to rounding error, and suggested the use of suitably chosen scaling factors as a remedy. In this paper we analyse the more general algorithm given by Broyden (1967), of which the DFP algorithm is a special case, and determine how, for quadratic functions, the choice of an arbitrary parameter affects convergence. We investigate how the successive errors depend, again for quadratic functions, upon the initial choice of iteration matrix, paying particular attention to the cases where this is either the unit matrix or a good approximation to the inverse Hessian. We finally give a tentative explanation of some of the observed experimental behaviour in the case where the function to be minimized is not quadratic.

2,046 citations
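
The finite-step behaviour on quadratics mentioned above can be illustrated with a small sketch. This is the standard textbook DFP update with exact line searches, written under assumed names (A, b, H), rather than the more general class analysed in the paper.

```python
# Minimal DFP quasi-Newton sketch on the convex quadratic
# f(x) = 0.5 * x.T @ A @ x - b.T @ x, with exact line searches; on an
# n-dimensional quadratic it reaches the minimizer in at most n steps.
import numpy as np

def dfp_on_quadratic(A, b, x0):
    n = len(b)
    x, H = x0.astype(float), np.eye(n)           # H approximates the inverse Hessian
    for _ in range(n):
        g = A @ x - b                            # gradient of the quadratic
        if np.linalg.norm(g) < 1e-12:
            break
        d = -H @ g                               # quasi-Newton search direction
        alpha = -(g @ d) / (d @ A @ d)           # exact line search on a quadratic
        s = alpha * d                            # step taken
        y = A @ s                                # change in gradient
        x = x + s
        # DFP rank-two update of the inverse-Hessian approximation
        H = H + np.outer(s, s) / (s @ y) - (H @ np.outer(y, y) @ H) / (y @ H @ y)
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(dfp_on_quadratic(A, b, np.zeros(2)))       # matches np.linalg.solve(A, b)
```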


Book Chapter
Gunnar Farnebäck
29 Jun 2003
TL;DR: A method to estimate displacement fields from the polynomial expansion coefficients is derived and, after a series of refinements, leads to a robust algorithm that shows good results on the Yosemite sequence.
Abstract: This paper presents a novel two-frame motion estimation algorithm. The first step is to approximate each neighborhood of both frames by quadratic polynomials, which can be done efficiently using the polynomial expansion transform. From observing how an exact polynomial transforms under translation, a method to estimate displacement fields from the polynomial expansion coefficients is derived; after a series of refinements this leads to a robust algorithm. Evaluation on the Yosemite sequence shows good results.

1,885 citations
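
This two-frame algorithm is widely available as cv2.calcOpticalFlowFarneback in OpenCV. The snippet below is only a usage sketch: the frame file names and parameter values are assumptions chosen for illustration, not settings taken from the paper.

```python
# Dense displacement field between two frames via Farneback's
# polynomial-expansion method as implemented in OpenCV.
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # assumed file names
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    0.5,   # pyr_scale: image scale between pyramid levels
    3,     # levels: number of pyramid levels
    15,    # winsize: averaging window for the expansion coefficients
    3,     # iterations per pyramid level
    5,     # poly_n: neighbourhood size for the polynomial expansion
    1.1,   # poly_sigma: Gaussian weighting of the neighbourhood
    0,     # flags
)
print(flow.shape)   # (height, width, 2): per-pixel (dx, dy) displacements
```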


Book Chapter
01 Aug 1976
TL;DR: It is shown that for stationary inputs the LMS adaptive algorithm, based on the method of steepest descent, approaches the theoretical limit of efficiency in terms of misadjustment and speed of adaptation when the eigenvalues of the input correlation matrix are equal or close in value.
Abstract: This paper describes the performance characteristics of the LMS adaptive filter, a digital filter composed of a tapped delay line and adjustable weights, whose impulse response is controlled by an adaptive algorithm. For stationary stochastic inputs, the mean-square error, the difference between the filter output and an externally supplied input called the "desired response," is a quadratic function of the weights, a paraboloid with a single fixed minimum point that can be sought by gradient techniques. The gradient estimation process is shown to introduce noise into the weight vector that is proportional to the speed of adaptation and number of weights. The effect of this noise is expressed in terms of a dimensionless quantity "misadjustment" that is a measure of the deviation from optimal Wiener performance. Analysis of a simple nonstationary case, in which the minimum point of the error surface is moving according to an assumed first-order Markov process, shows that an additional contribution to misadjustment arises from "lag" of the adaptive process in tracking the moving minimum point. This contribution, which is additive, is proportional to the number of weights but inversely proportional to the speed of adaptation. The sum of the misadjustments can be minimized by choosing the speed of adaptation to make equal the two contributions. It is further shown, in Appendix A, that for stationary inputs the LMS adaptive algorithm, based on the method of steepest descent, approaches the theoretical limit of efficiency in terms of misadjustment and speed of adaptation when the eigenvalues of the input correlation matrix are equal or close in value. When the eigenvalues are highly disparate (λ_max/λ_min > 10), an algorithm similar to LMS but based on Newton's method would approach this theoretical limit very closely.

1,398 citations
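
As a small illustration of the weight adaptation described above (a minimal sketch with an assumed system-identification setup, not code from the paper), the weight vector descends the quadratic mean-square-error surface by a stochastic steepest-descent update:

```python
# Minimal LMS adaptive filter: a tapped delay line with adjustable weights,
# updated by stochastic steepest descent on the quadratic MSE surface.
import numpy as np

def lms(x, d, n_taps=4, mu=0.01):
    w = np.zeros(n_taps)                         # adjustable weights
    e = np.zeros_like(d)                         # error signal
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]        # delay-line contents, newest first
        e[k] = d[k] - w @ u                      # error vs. the desired response
        w = w + 2 * mu * e[k] * u                # LMS (steepest-descent) update
    return w, e

# Toy system identification: recover an unknown 4-tap filter from noisy data.
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)                      # stationary white input
h_true = np.array([0.5, -0.3, 0.2, 0.1])         # assumed unknown system
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.normal(size=len(x))
w_hat, err = lms(x, d, n_taps=4, mu=0.01)
print(np.round(w_hat, 3))                        # close to h_true
```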


Network Information
Related Topics (5)

Differential equation: 88K papers, 2M citations (86% related)
Matrix (mathematics): 105.5K papers, 1.9M citations (85% related)
Nonlinear system: 208.1K papers, 4M citations (83% related)
Bounded function: 77.2K papers, 1.3M citations (83% related)
Partial differential equation: 70.8K papers, 1.6M citations (82% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    9
2021    196
2020    226
2019    228
2018    216
2017    230

Top Attributes

Topic's top 5 most impactful authors

Jaume Llibre: 40 papers, 410 citations
Alain Billionnet: 11 papers, 496 citations
Claudia Valls: 8 papers, 38 citations
Robert Shorten: 6 papers, 176 citations
Ahmed Laghribi: 6 papers, 36 citations