Author

David F. Shanno

Bio: David F. Shanno is an academic researcher from Rutgers University. His research focuses on interior-point methods and nonlinear programming. He has an h-index of 34 and has co-authored 71 publications receiving 8,025 citations. His previous affiliations include the University of Toronto and the University of Arizona.


Papers
Journal ArticleDOI
TL;DR: This paper presents a class of approximating matrices as a function of a scalar parameter, investigates the optimal conditioning of these matrices under an appropriate norm, and reports computational results showing that the new methods arising from conditioning considerations outperform known methods.
Abstract: Quasi-Newton methods accelerate the steepest-descent technique for function minimization by using computational history to generate a sequence of approximations to the inverse of the Hessian matrix. This paper presents a class of approximating matrices as a function of a scalar parameter. The problem of optimal conditioning of these matrices under an appropriate norm as a function of the scalar parameter is investigated. A set of computational results verifies the superiority of the new methods arising from conditioning considerations to known methods.
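To make the general idea concrete, here is a minimal sketch of a quasi-Newton loop. It is not Shanno's parameterized family or its optimal conditioning; it uses the standard BFGS update of an inverse-Hessian approximation together with a simple backtracking line search (both assumptions for the example) to show how computational history, the step s and gradient change y, drives the approximation.

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    # One BFGS update of the inverse-Hessian approximation H from the
    # step s = x_{k+1} - x_k and gradient change y = g_{k+1} - g_k.
    # Assumes the curvature condition y @ s > 0 holds.
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def quasi_newton(f, grad, x0, max_iter=100, tol=1e-8):
    # Accelerated steepest descent: the direction -H g replaces -g.
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):  # backtracking (Armijo)
            t *= 0.5
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        H = bfgs_inverse_update(H, s, g_new - g)
        x, g = x_new, g_new
    return x

# Example on a simple quadratic with minimum at (1, 2).
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] - 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] - 2)])
print(quasi_newton(f, grad, [0.0, 0.0]))
```

The paper's contribution sits one level above this sketch: it studies a one-parameter family of such updates and investigates how to choose the parameter so that the approximating matrices stay well conditioned.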

3,359 citations

Journal ArticleDOI
TL;DR: Numerical comparisons with MINOS and LANCELOT show that the interior-point algorithm for nonconvex nonlinear programming is efficient, and has the promise of greatly reducing solution times on at least some classes of models.
Abstract: The paper describes an interior-point algorithm for nonconvex nonlinear programming which is a direct extension of interior-point methods for linear and quadratic programming. Major modifications include a merit function and an altered search direction to ensure that a descent direction for the merit function is obtained. Preliminary numerical testing indicates that the method is robust. Further, numerical comparisons with MINOS and LANCELOT show that the method is efficient, and has the promise of greatly reducing solution times on at least some classes of models.
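The paper's primal-dual method and merit function are more involved than can be shown briefly, but the basic interior-point idea, replacing inequality constraints with a logarithmic barrier that is gradually weakened, can be sketched as follows. Everything here (the toy nonconvex objective, the unit-disk constraint, and the use of SciPy's Nelder-Mead to solve each barrier subproblem) is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Hypothetical nonconvex objective.
    return (x[0] - 1.0) ** 2 + 0.1 * np.sin(5.0 * x[1]) + x[1] ** 2

def g(x):
    # Inequality constraints written as g(x) <= 0: stay inside the unit disk.
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0])

def barrier_solve(x0, mu=1.0, shrink=0.2, iters=8):
    # Classic log-barrier loop: minimize f(x) - mu * sum(log(-g(x)))
    # for a decreasing sequence of barrier parameters mu.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        def phi(x):
            gx = g(x)
            if np.any(gx >= 0):      # left the interior: reject the point
                return np.inf
            return f(x) - mu * np.sum(np.log(-gx))
        x = minimize(phi, x, method="Nelder-Mead").x
        mu *= shrink                  # tighten the barrier
    return x

print(barrier_solve([0.1, 0.1]))
```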

567 citations

Journal ArticleDOI
TL;DR: The traditional Fletcher-Reeves and Polak-Ribiere algorithms can be modified, in a form discovered by Perry, into a method that can be interpreted as a memoryless BFGS algorithm, which may then be scaled optimally in the sense of Oren and Spedicato.
Abstract: Conjugate gradient methods are iterative methods for finding the minimizer of a scalar function f(x) of a vector variable x which do not update an approximation to the inverse Hessian matrix. This paper examines the effects of inexact linear searches on the methods and shows how the traditional Fletcher-Reeves and Polak-Ribiere algorithms may be modified in a form discovered by Perry to a sequence which can be interpreted as a memoryless BFGS algorithm. This algorithm may then be scaled optimally in the sense of Oren and Spedicato. This scaling can be combined with Beale restarts and Powell's restart criterion. Computational results show that this new method substantially outperforms known conjugate gradient methods on a wide class of problems.
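To make the "memoryless BFGS" interpretation concrete, the sketch below computes the search direction obtained by applying a single BFGS update to the identity matrix, using only the most recent step s and gradient change y, and then setting d = -Hg. It assumes the curvature condition y @ s > 0 and omits the Oren-Spedicato scaling, Beale restarts, and Powell's restart criterion discussed in the paper.

```python
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    # Direction d = -H g, where H is the BFGS update of the identity
    # built from the latest step s and gradient change y only.
    rho = 1.0 / (y @ s)
    sg = s @ g
    yg = y @ g
    # H g with H = (I - rho s y^T)(I - rho y s^T) + rho s s^T, expanded
    # so that no matrix is ever formed (vector operations only).
    Hg = (g
          - rho * sg * y
          - rho * yg * s
          + rho * rho * (y @ y) * sg * s
          + rho * sg * s)
    return -Hg
```

In a conjugate-gradient-style loop, this direction would feed the next (possibly inexact) line search in place of the usual Fletcher-Reeves or Polak-Ribiere formula.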

432 citations

Journal ArticleDOI
TL;DR: In this paper, the Monte Carlo method is used to solve for the price of a call option when the variance is changing stochastically.
Abstract: The Monte Carlo method is used to solve for the price of a call when the variance is changing stochastically.
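As a rough illustration of the approach, the sketch below prices a European call by Monte Carlo simulation when the variance itself follows a stochastic process (here, a mean-reverting square-root process). The specific variance dynamics, the lack of correlation between the two Brownian motions, and all parameter values are assumptions for the example and are not taken from the paper.

```python
import numpy as np

def mc_call_stochastic_variance(S0, K, r, T, v0, kappa, theta, xi,
                                n_paths=100_000, n_steps=200, seed=0):
    # Monte Carlo price of a European call with stochastic variance v.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)   # shock to the asset price
        z2 = rng.standard_normal(n_paths)   # shock to the variance
        S *= np.exp((r - 0.5 * v) * dt + np.sqrt(v * dt) * z1)
        # Reflect to keep the simulated variance nonnegative.
        v = np.abs(v + kappa * (theta - v) * dt + xi * np.sqrt(v * dt) * z2)
    payoff = np.maximum(S - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

# Example with hypothetical parameters:
print(mc_call_stochastic_variance(S0=100, K=100, r=0.05, T=1.0,
                                  v0=0.04, kappa=2.0, theta=0.04, xi=0.3))
```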

417 citations

Journal ArticleDOI
TL;DR: In this article, a primal-dual interior-point algorithm for linear programming is described that easily handles simple bounds on the primal variables and incorporates free variables, which had not previously been included in a primal-dual implementation.
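For orientation, the sketch below implements a standard primal-dual path-following loop for a linear program in equality form, min c'x subject to Ax = b, x >= 0, solving the normal equations for the Newton step on the perturbed KKT conditions. It is a teaching-sized sketch: the handling of simple bounds and free variables described in the article, presolve, and a robust starting point are all omitted, and the dense linear algebra is only suitable for tiny problems.

```python
import numpy as np

def lp_primal_dual(A, b, c, tol=1e-8, max_iter=100):
    # Primal-dual path-following sketch for: min c'x  s.t.  Ax = b, x >= 0.
    m, n = A.shape
    x = np.ones(n)      # primal variables (kept strictly positive)
    z = np.ones(n)      # dual slacks (kept strictly positive)
    y = np.zeros(m)     # dual variables for Ax = b
    for _ in range(max_iter):
        rb = A @ x - b                    # primal residual
        rc = A.T @ y + z - c              # dual residual
        mu = (x @ z) / n                  # duality measure
        if mu < tol and np.linalg.norm(rb) < tol and np.linalg.norm(rc) < tol:
            break
        sigma = 0.1                       # centering parameter
        rxz = x * z - sigma * mu          # perturbed complementarity residual
        d = x / z
        # Normal equations  A diag(x/z) A' dy = rhs  for the Newton step.
        rhs = -rb + A @ (rxz / z - d * rc)
        dy = np.linalg.solve(A @ (d[:, None] * A.T), rhs)
        dz = -rc - A.T @ dy
        dx = -(rxz + x * dz) / z
        # Damped ratio test keeps x and z strictly positive.
        a_p = min(1.0, 0.9995 * min((-x[dx < 0] / dx[dx < 0]), default=np.inf))
        a_d = min(1.0, 0.9995 * min((-z[dz < 0] / dz[dz < 0]), default=np.inf))
        x, y, z = x + a_p * dx, y + a_d * dy, z + a_d * dz
    return x, y, z
```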

311 citations


Cited by
Book
01 Jan 1995
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Abstract: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modelling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models. Also covered are various forms of error functions, principal algorithms for error function minimization, learning and generalization in neural networks, and Bayesian techniques and their applications. Designed as a text, with over 100 exercises, this fully up-to-date work will benefit anyone involved in the fields of neural computation and pattern recognition.

19,056 citations

Book
01 Jan 1998
TL;DR: A tour of wavelet-based signal processing, moving from the Fourier and time-frequency viewpoints through wavelet bases, wavelet packets, and local cosine bases to approximation, estimation, and transform coding.
Abstract: Contents: Introduction to a Transient World; Fourier Kingdom; Discrete Revolution; Time Meets Frequency; Frames; Wavelet Zoom; Wavelet Bases; Wavelet Packet and Local Cosine Bases; An Approximation Tour; Estimations are Approximations; Transform Coding; Appendix A: Mathematical Complements; Appendix B: Software Toolboxes.

17,693 citations

Book
01 Nov 2008
TL;DR: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization, responding to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems.
Abstract: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous - one that reveals both the beautiful nature of the discipline and its practical side.

17,420 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Journal ArticleDOI
TL;DR: This tutorial gives an overview of the basic ideas underlying Support Vector (SV) machines for function estimation, and includes a summary of currently used algorithms for training SV machines, covering both the quadratic programming part and advanced methods for dealing with large datasets.
Abstract: In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
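For readers who want to try the ideas without implementing the quadratic program themselves, a brief usage sketch with scikit-learn's SVR (which solves the underlying QP internally) is given below; the toy data, kernel, and parameter choices are arbitrary and not taken from the tutorial.

```python
import numpy as np
from sklearn.svm import SVR

# Toy function-estimation example with epsilon-insensitive SV regression.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, size=(200, 1)), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)  # fit by solving a QP internally
model.fit(X, y)
print(model.predict([[1.0], [2.5]]))
```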

10,696 citations