About: Quadratic equation is a research topic. Over its lifetime, 14682 publications have been published within this topic, receiving 246094 citations.
01 Jan 1987
TL;DR: This book describes the first unified theory of polynomial-time interior-point methods; several of the new algorithms it describes, e.g., the projective method, have been implemented, tested on "real world" problems, and found to be extremely efficient in practice.
Abstract: Written for specialists working in optimization, mathematical programming, or control theory. The general theory of path-following and potential reduction interior-point polynomial-time methods, interior-point methods for linear and quadratic programming, polynomial-time methods for nonlinear convex programming, efficient computation methods for control problems and variational inequalities, and acceleration of path-following methods are covered. In this book, the authors describe the first unified theory of polynomial-time interior-point methods. Their approach provides a simple and elegant framework in which all known polynomial-time interior-point methods can be explained and analyzed; this approach yields polynomial-time interior-point methods for a wide variety of problems beyond the traditional linear and quadratic programs. The book contains new and important results in the general theory of convex programming, e.g., their "conic" problem formulation in which duality theory is completely symmetric. For each algorithm described, the authors carefully derive precise bounds on the computational effort required to solve a given family of problems to a given precision. In several cases they obtain better problem complexity estimates than were previously known. Several of the new algorithms described in this book, e.g., the projective method, have been implemented, tested on "real world" problems, and found to be extremely efficient in practice. Contents: Chapter 1: Self-Concordant Functions and Newton Method; Chapter 2: Path-Following Interior-Point Methods; Chapter 3: Potential Reduction Interior-Point Methods; Chapter 4: How to Construct Self-Concordant Barriers; Chapter 5: Applications in Convex Optimization; Chapter 6: Variational Inequalities with Monotone Operators; Chapter 7: Acceleration for Linear and Linearly Constrained Quadratic Problems.
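The path-following scheme covered by the book can be illustrated on a toy problem. The sketch below is illustrative only (the problem, function names, and parameter values are chosen for this example, not taken from the book): it minimizes x subject to x ≥ 1 by applying Newton's method to the barrier subproblem t·x − log(x − 1) and repeatedly increasing t, so the iterates track the central path x*(t) = 1 + 1/t toward the optimum.

```python
def newton_min(t, x, iters=20):
    """Damped Newton on psi(x) = t*x - log(x - 1), the barrier subproblem."""
    for _ in range(iters):
        g = t - 1.0 / (x - 1.0)          # psi'(x)
        h = 1.0 / (x - 1.0) ** 2         # psi''(x) > 0
        step = g / h
        # halve the step until the iterate stays strictly feasible (x > 1)
        while x - step <= 1.0:
            step *= 0.5
        x -= step
    return x

def path_following(t0=1.0, mu=10.0, outer=8):
    """Follow the central path x*(t) = 1 + 1/t toward the solution x = 1."""
    t, x = t0, 3.0                       # strictly feasible starting point
    for _ in range(outer):
        x = newton_min(t, x)             # re-center on the path at this t
        t *= mu                          # shrink the gap, which is ~ 1/t
    return x

print(path_following())  # approaches the optimum x = 1
```

Each outer iteration warm-starts Newton from the previous center, which is why a handful of inner steps suffice; the polynomial-time analysis in the book makes this precise via self-concordance.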
01 Jun 2001
TL;DR: The authors present the basic theory of state-of-the-art polynomial time interior point methods for linear, conic quadratic, and semidefinite programming as well as their numerous applications in engineering.
Abstract: This is a book devoted to well-structured and thus efficiently solvable convex optimization problems, with emphasis on conic quadratic and semidefinite programming. The authors present the basic theory underlying these problems as well as their numerous applications in engineering, including synthesis of filters, Lyapunov stability analysis, and structural design. The authors also discuss the complexity issues and provide an overview of the basic theory of state-of-the-art polynomial time interior point methods for linear, conic quadratic, and semidefinite programming. The book's focus on well-structured convex problems in conic form allows for unified theoretical and algorithmic treatment of a wide spectrum of important optimization problems arising in applications.
29 Jun 2003
TL;DR: A method to estimate displacement fields from the polynomial expansion coefficients is derived; after a series of refinements it leads to a robust algorithm that shows good results on the Yosemite sequence.
Abstract: This paper presents a novel two-frame motion estimation algorithm. The first step is to approximate each neighborhood of both frames by quadratic polynomials, which can be done efficiently using the polynomial expansion transform. By observing how an exact polynomial transforms under translation, a method to estimate displacement fields from the polynomial expansion coefficients is derived, which after a series of refinements leads to a robust algorithm. Evaluation on the Yosemite sequence shows good results.
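The key observation can be sketched in a few lines. If a neighborhood is modeled as f₁(x) = xᵀAx + b₁ᵀx + c and the second frame is an exact translation f₂(x) = f₁(x − d), then the linear coefficient shifts to b₂ = b₁ − 2Ad, so the displacement is d = ½A⁻¹(b₁ − b₂). The numpy sketch below assumes the quadratic coefficients are already known (the full algorithm estimates them via polynomial expansion and adds neighborhood averaging and iterative refinement); all names are illustrative.

```python
import numpy as np

# Quadratic neighborhood model f1(x) = x^T A x + b1^T x + c.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])      # symmetric and invertible
b1 = np.array([1.0, -2.0])
d_true = np.array([0.3, -0.7])  # displacement to recover

# Translating the signal, f2(x) = f1(x - d), shifts the linear term:
#   b2 = b1 - 2 A d   =>   d = 0.5 * A^{-1} (b1 - b2)
b2 = b1 - 2.0 * A @ d_true
d_est = 0.5 * np.linalg.solve(A, b1 - b2)

print(d_est)  # recovers d_true
```

In practice A and b are noisy per-pixel estimates, which is why the paper averages the constraint over neighborhoods instead of solving it pointwise.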
TL;DR: In this article, a more detailed analysis of a class of minimization algorithms, which includes as a special case the DFP (Davidon-Fletcher-Powell) method, has been presented.
Abstract: This paper presents a more detailed analysis of a class of minimization algorithms, which includes as a special case the DFP (Davidon-Fletcher-Powell) method, than has previously appeared. Only quadratic functions are considered but particular attention is paid to the magnitude of successive errors and their dependence upon the initial matrix. On the basis of this a possible explanation of some of the observed characteristics of the class is tentatively suggested. PROBABLY the best-known algorithm for determining the unconstrained minimum of a function of many variables, where explicit expressions are available for the first partial derivatives, is that of Davidon (1959) as modified by Fletcher & Powell (1963). This algorithm has many virtues. It is simple and does not require at any stage the solution of linear equations. It minimizes a quadratic function exactly in a finite number of steps and this property makes convergence of this algorithm rapid, when applied to more general functions, in the neighbourhood of the solution. It is, at least in theory, stable since the iteration matrix H_j, which transforms the jth gradient into the jth step direction, may be shown to be positive definite. In practice the algorithm has been generally successful, but it has exhibited some puzzling behaviour. Broyden (1967) noted that H_j does not always remain positive definite, and attributed this to rounding errors. Pearson (1968) found that for some problems the solution was obtained more efficiently if H_j was reset to a positive definite matrix, often the unit matrix, at intervals during the computation. Bard (1968) noted that H_j could become singular, attributed this to rounding error and suggested the use of suitably chosen scaling factors as a remedy.
In this paper we analyse the more general algorithm given by Broyden (1967), of which the DFP algorithm is a special case, and determine how for quadratic functions the choice of an arbitrary parameter affects convergence. We investigate how the successive errors depend, again for quadratic functions, upon the initial choice of iteration matrix, paying particular attention to the cases where this is either the unit matrix or a good approximation to the inverse Hessian. We finally give a tentative explanation of some of the observed experimental behaviour in the case where the function to be minimized is not quadratic.
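The quadratic setting analysed above can be sketched concretely. The numpy code below implements the DFP member of the family (not Broyden's general parameterized class) with exact line search on f(x) = ½xᵀQx − pᵀx, starting from the unit matrix as the iteration matrix; function and variable names are illustrative. On a quadratic, the method terminates in at most n steps in exact arithmetic.

```python
import numpy as np

def dfp_quadratic(Q, p, x0):
    """DFP with exact line search on f(x) = 0.5 x^T Q x - p^T x."""
    n = len(p)
    x = x0.astype(float)
    H = np.eye(n)                       # initial iteration matrix (unit matrix)
    g = Q @ x - p                       # gradient of the quadratic
    for _ in range(n):
        if np.linalg.norm(g) < 1e-12:
            break
        d = -H @ g                      # quasi-Newton search direction
        alpha = -(g @ d) / (d @ Q @ d)  # exact line search on the quadratic
        s = alpha * d                   # step taken
        x = x + s
        g_new = Q @ x - p
        y = g_new - g                   # change in gradient
        # DFP rank-two update of the inverse-Hessian approximation
        H += np.outer(s, s) / (s @ y) - (H @ np.outer(y, y) @ H) / (y @ H @ y)
        g = g_new
    return x

Q = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])
p = np.array([1.0, -1.0, 2.0])
x_star = dfp_quadratic(Q, p, np.zeros(3))
print(np.allclose(Q @ x_star, p))  # the minimizer solves Q x = p
```

In floating point the updated H need not stay positive definite, which is exactly the puzzling behaviour (Broyden 1967, Pearson 1968, Bard 1968) that motivates the paper's error analysis.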
TL;DR: In this article, a general recovery technique is developed for determining the derivatives (stresses) of the finite element solutions at nodes; it has been tested for a group of widely used linear, quadratic and cubic elements for both one- and two-dimensional problems.
Abstract: This is the first of two papers concerning superconvergent recovery techniques and a posteriori error estimation. In this paper, a general recovery technique is developed for determining the derivatives (stresses) of the finite element solutions at nodes. The implementation of the recovery technique is simple and cost effective. The technique has been tested for a group of widely used linear, quadratic and cubic elements for both one- and two-dimensional problems. Numerical experiments demonstrate that the recovered nodal values of the derivatives with linear and cubic elements are superconvergent. One order higher accuracy is achieved by the procedure with linear and cubic elements, but two orders higher accuracy is achieved for the derivatives with quadratic elements. In particular, an O(h4) convergence of the nodal values of the derivatives for a quadratic triangular element is reported for the first time. The performance of the proposed technique is compared with the widely used smoothing procedure of global L2 projection and other methods. It is found that the derivatives recovered at interelement nodes, by using L2 projection, are also superconvergent for linear elements but not for quadratic elements. Numerical experiments on the convergence of the recovered solutions in the energy norm are also presented. Higher rates of convergence are again observed. The results presented in this part of the paper indicate clearly that a new, powerful and economical process is now available which should supersede the currently used post-processing procedures applied in most codes.
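The superconvergence effect for linear elements can be demonstrated in one dimension. The sketch below is not the paper's recovery procedure; it uses simple nodal averaging of the two element slopes meeting at each interior node, together with the classical 1D fact that linear finite elements for −u″ = f on a uniform mesh reproduce the exact solution at the nodes (so nodal values are taken directly from u(x) = sin(πx)). The raw element slopes converge at O(h), while the averaged nodal derivatives converge at O(h²).

```python
import numpy as np

def recovered_vs_element_slopes(n):
    """Compare element-wise and node-averaged derivatives on a uniform mesh."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    u = np.sin(np.pi * x)               # nodal values (exact for -u'' = f in 1D)
    slopes = np.diff(u) / h             # piecewise-constant element derivatives
    # Recovery by averaging the two element slopes meeting at each interior node
    recovered = 0.5 * (slopes[:-1] + slopes[1:])
    exact = np.pi * np.cos(np.pi * x[1:-1])
    err_elem = np.max(np.abs(slopes[:-1] - exact))   # one-sided slope, O(h)
    err_rec = np.max(np.abs(recovered - exact))      # averaged value, O(h^2)
    return err_elem, err_rec

e1, r1 = recovered_vs_element_slopes(16)
e2, r2 = recovered_vs_element_slopes(32)
print(e1 / e2)  # ~2: element derivatives gain one digit per halving, O(h)
print(r1 / r2)  # ~4: recovered nodal derivatives are superconvergent, O(h^2)
```

Halving h halves the element-slope error but quarters the recovered-derivative error, which is the "one order higher accuracy" the paper reports for linear elements.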