Author

Garth P. McCormick

Bio: Garth P. McCormick is an academic researcher from George Washington University. The author has contributed to research in topics: Nonlinear programming & Rate of convergence. The author has an h-index of 11 and has co-authored 25 publications receiving 2,472 citations.

Papers
Journal ArticleDOI
TL;DR: For factorable nonlinear programming problems, a computable procedure for obtaining tight underestimating convex programs is presented and used to exclude from consideration regions where the global minimizer cannot exist.
Abstract: For nonlinear programming problems which are factorable, a computable procedure for obtaining tight underestimating convex programs is presented. This is used to exclude from consideration regions where the global minimizer cannot exist.
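The procedure's best-known legacy is the "McCormick envelope" for a bilinear term w = xy over a box. As a standard illustration derived from this paper (the bound notation x^L, x^U, y^L, y^U is mine, not the abstract's), the linear under- and overestimators are:

\begin{aligned}
w &\ge x^{L}y + x\,y^{L} - x^{L}y^{L}, \qquad & w &\ge x^{U}y + x\,y^{U} - x^{U}y^{U},\\
w &\le x^{U}y + x\,y^{L} - x^{U}y^{L}, \qquad & w &\le x^{L}y + x\,y^{U} - x^{L}y^{U},
\end{aligned}
\qquad x \in [x^{L}, x^{U}],\ y \in [y^{L}, y^{U}].

Replacing each bilinear term of a factorable function with these estimators yields an underestimating convex program of the kind the paper describes.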

2,053 citations

Book
01 Jan 1968
TL;DR: A textbook treatment of nonlinear programming covering unconstrained optimization (Newton, conjugate direction, and quasi-Newton methods), optimality conditions, linearly and nonlinearly constrained problems, sequential unconstrained minimization techniques, and the search for global solutions.
Abstract: BASICS. The Nature of Optimization Problems. Analytical Background. Factorable Functions. UNCONSTRAINED PROBLEMS. Unconstrained Optimization Models. Minimizing a Function of a Single Variable. General Convergence Theory for Unconstrained Minimization Algorithms. Newton's Method With Variations. Conjugate Direction Algorithms. Quasi-Newton Methods. OPTIMALITY CONDITIONS FOR CONSTRAINED PROBLEMS. First- and Second-Order Optimality Conditions. Applications of Optimality Conditions. LINEARLY CONSTRAINED PROBLEMS. Models with Linear Constraints. Variable-Reduction Algorithms. NONLINEARLY CONSTRAINED PROBLEMS. Models with Nonlinear Constraints. Direct Algorithms for Nonlinearly Constrained Problems. Sequential Unconstrained Minimization Techniques. Sequential Constraint Linearization Techniques. OTHER TOPICS. Obtaining Global Solutions. Geometric Programming. References. Author and Subject Indexes.
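As a hedged illustration of the single-variable and Newton-method material in the early chapters (a minimal sketch, not code from the book; the curvature safeguard and tolerances are my assumptions):

def newton_minimize(df, d2f, x0, tol=1e-10, max_iter=100):
    """Seek a stationary point of f by Newton steps on f'(x) = 0."""
    x = x0
    for _ in range(max_iter):
        g, h = df(x), d2f(x)
        # fall back to a plain gradient step if the curvature is not positive
        step = -g / h if h > 0 else -g
        x += step
        if abs(step) < tol:
            break
    return x

# Example: minimize f(x) = x**4 - 3*x**2 + 2, so f'(x) = 4x^3 - 6x.
x_star = newton_minimize(lambda x: 4*x**3 - 6*x, lambda x: 12*x**2 - 6, 2.0)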

363 citations

Journal ArticleDOI
TL;DR: Armijo's step-size procedure for function minimization is modified to include second derivative information, and accumulation points are shown to be stationary points with positive semi-definite Hessian matrices.
Abstract: Armijo's step-size procedure for function minimization is modified to include second derivative information. Accumulation points using this procedure are shown to be stationary points with positive semi-definite Hessian matrices.
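For context, here is a minimal sketch of the classical first-order Armijo backtracking rule that the paper modifies; this is the standard textbook version, not McCormick's second-derivative variant, and the parameter values are illustrative assumptions:

import numpy as np

def armijo_step(f, grad, x, d, alpha0=1.0, beta=0.5, sigma=1e-4):
    """Shrink the step length until the sufficient-decrease condition holds.
    The direction d must be a descent direction (grad(x).dot(d) < 0)."""
    alpha, fx, slope = alpha0, f(x), sigma * grad(x).dot(d)
    while f(x + alpha * d) > fx + alpha * slope:
        alpha *= beta
    return alpha

# Example on f(x) = ||x||^2 with the steepest-descent direction d = -grad f(x).
f = lambda x: x.dot(x)
grad = lambda x: 2 * x
x = np.array([1.0, -2.0])
alpha = armijo_step(f, grad, x, -grad(x))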

75 citations

Journal ArticleDOI
TL;DR: Convergence of the continuous version of this projective SUMT method is proved, and an acceleration procedure based on the nonvanishing of the Jacobian of the Karush-Kuhn-Tucker system at a minimizer is shown to converge quadratically.
Abstract: An algorithm for solving convex programming problems is derived from the differential equation characterizing the trajectory of unconstrained minimizers of the classical logarithmic barrier function. Convergence of the continuous version of this projective SUMT method is proved under minimal assumptions. Extension of the algorithm to a form which handles linear equality constraints produces a differential equation analogue of Karmarkar's method for linear programming. The discrete version uses the same method of search and finds the step size by minimizing the logarithmic method of centers function. An acceleration procedure based on the nonvanishing of the Jacobian of the Karush-Kuhn-Tucker system at a minimizer is shown to converge quadratically. When the problem variables are bounded, dual feasible points are available and the algorithm produces at each iteration lower and upper bounds on the global minimum. A matrix approximation is given which greatly reduces the traditional problems in inverting the...
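A minimal sketch of the classical logarithmic barrier (SUMT) idea underlying the paper, for a problem min f(x) subject to g_i(x) <= 0; the inner solver, the barrier schedule, and all parameter values are my assumptions, and this is not the paper's projective algorithm:

import numpy as np
from scipy.optimize import minimize

def barrier_sumt(f, gs, x0, mu=1.0, shrink=0.1, rounds=6):
    """Sequentially minimize f(x) - mu * sum(log(-g_i(x))) for decreasing mu."""
    x = x0
    for _ in range(rounds):
        def phi(z, mu=mu):
            g = np.array([gi(z) for gi in gs])
            if np.any(g >= 0):       # outside the strict interior: reject
                return np.inf
            return f(z) - mu * np.log(-g).sum()
        x = minimize(phi, x, method="Nelder-Mead").x
        mu *= shrink                 # follow the central path inward
    return x

# Example: minimize (x - 2)^2 subject to x <= 1; the solution is x = 1.
x_star = barrier_sumt(lambda z: (z[0] - 2)**2, [lambda z: z[0] - 1], np.array([0.0]))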

45 citations

Journal ArticleDOI
TL;DR: It is shown that algorithms for minimizing an unconstrained function F(x), x ∈ E^n, which are solely methods of conjugate directions can be expected to exhibit only an n or (n−1) step superlinear rate of convergence to an isolated local minimizer.
Abstract: It is shown that algorithms for minimizing an unconstrained function F(x), x ∈ E^n, which are solely methods of conjugate directions can be expected to exhibit only an n or (n−1) step superlinear rate of convergence to an isolated local minimizer. This is contrasted with quasi-Newton methods, which can be expected to exhibit every-step superlinear convergence. Similar statements about a quadratic rate of convergence hold when a Lipschitz condition is placed on the second derivatives of F(x).
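A minimal Fletcher-Reeves sketch illustrates the class of conjugate-direction methods the result concerns (standard textbook form under my parameter choices, not code from the paper):

import numpy as np
from scipy.optimize import minimize_scalar

def fletcher_reeves(f, grad, x0, max_iter=50, tol=1e-8):
    """Nonlinear conjugate gradient with a one-dimensional line search."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = minimize_scalar(lambda a: f(x + a * d)).x   # line search
        x = x + alpha * d
        g_new = grad(x)
        beta = g_new.dot(g_new) / g.dot(g)   # Fletcher-Reeves update
        d = -g_new + beta * d
        g = g_new
    return x

# Example: a convex quadratic in two variables with minimizer (1, 2).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = A @ np.array([1.0, 2.0])
x_star = fletcher_reeves(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b,
                         np.array([0.0, 0.0]))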

35 citations


Cited by
Journal ArticleDOI
TL;DR: Several arguments that support the observed high accuracy of SVMs are reviewed, and numerous examples and proofs of most of the key theorems are given.
Abstract: The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
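As a hedged illustration of the kernel mapping the tutorial describes, a minimal RBF-kernel SVM on data that is not linearly separable (the library choice and all parameter values are my assumptions, not part of the tutorial):

import numpy as np
from sklearn.svm import SVC

# Toy two-class data: the label is the XOR of the coordinate signs.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (np.sign(X[:, 0]) != np.sign(X[:, 1])).astype(int)

# A Gaussian RBF kernel implicitly maps the data into a space where it separates.
clf = SVC(kernel="rbf", C=10.0, gamma=1.0).fit(X, y)
print("training accuracy:", clf.score(X, y))
print("support vectors per class:", clf.n_support_)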

15,696 citations

Book
01 Jan 1992
TL;DR: A book-length treatment of genetic algorithms and evolution programs: what GAs are, how and why they work, constraint handling, evolution strategies, and applications ranging from the transportation and traveling salesman problems to machine learning.
Abstract: 1. GAs: What Are They? 2. GAs: How Do They Work? 3. GAs: Why Do They Work? 4. GAs: Selected Topics. 5. Binary or Float? 6. Fine Local Tuning. 7. Handling Constraints. 8. Evolution Strategies and Other Methods. 9. The Transportation Problem. 10. The Traveling Salesman Problem. 11. Evolution Programs for Various Discrete Problems. 12. Machine Learning. 13. Evolutionary Programming and Genetic Programming. 14. A Hierarchy of Evolution Programs. 15. Evolution Programs and Heuristics. 16. Conclusions. Appendices A-D. References.
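A minimal sketch of the binary GA loop covered in the opening chapters (the representation, operators, and parameters here are illustrative assumptions, not the book's code):

import random

def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                      p_cross=0.9, p_mut=0.01):
    """Binary GA: tournament selection, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            a, b = random.sample(pop, 2)            # binary tournament
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select()[:], select()[:]
            if random.random() < p_cross:           # one-point crossover
                cut = random.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_bits):
                    if random.random() < p_mut:     # bit-flip mutation
                        child[i] ^= 1
                children.append(child)
        pop = children[:pop_size]
    return max(pop, key=fitness)

# Example: OneMax, i.e. maximize the number of ones in the bitstring.
best = genetic_algorithm(fitness=sum)
print(sum(best), "ones out of 20")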

12,212 citations

Journal ArticleDOI
TL;DR: This tutorial gives an overview of the basic ideas underlying Support Vector (SV) machines for function estimation, and includes a summary of currently used algorithms for training SV machines, covering both the quadratic programming part and advanced methods for dealing with large datasets.
Abstract: In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
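A companion sketch for the function-estimation setting the tutorial covers: epsilon-insensitive Support Vector Regression (the library choice and parameter values are illustrative assumptions):

import numpy as np
from sklearn.svm import SVR

# Noisy samples of a smooth target function.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 6, size=(100, 1)), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(100)

# epsilon sets the insensitive tube; points inside it contribute no loss.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
y_hat = model.predict(X)
print("fit RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))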

10,696 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, covering topics from robot motion planning to planning under the differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.
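Among the sampling-based planners central to the book is the Rapidly-exploring Random Tree (RRT), introduced by the author; here is a minimal obstacle-free 2-D sketch (the workspace and all constants are my assumptions):

import math
import random

def rrt(start, goal, step=0.5, max_nodes=2000, goal_tol=0.5):
    """Minimal RRT in the free square [0, 10]^2: grow a tree toward random samples."""
    nodes, parent = [start], {0: None}
    for _ in range(max_nodes):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        # extend from the nearest existing node a short step toward the sample
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        t = min(step / d, 1.0) if d > 0 else 0.0
        new = (nx + t * (sample[0] - nx), ny + t * (sample[1] - ny))
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:         # reached the goal region
            path, k = [], len(nodes) - 1
            while k is not None:                    # backtrack to the root
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 9.0))
print("path nodes:", len(path) if path else "not found")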

6,340 citations

Journal ArticleDOI
TL;DR: A unified framework is presented for the design and performance analysis of algorithms for solving change detection problems, and links with the analytical redundancy approach to fault detection in linear systems are established.
Abstract: This book is downloadable from http://www.irisa.fr/sisthem/kniga/. Many monitoring problems can be stated as the problem of detecting a change in the parameters of a static or dynamic stochastic system. The main goal of this book is to describe a unified framework for the design and the performance analysis of algorithms for solving these change detection problems. The book also contains the key mathematical background necessary for this purpose. Finally, links with the analytical redundancy approach to fault detection in linear systems are established. We call an abrupt change any change in the parameters of the system that occurs either instantaneously or at least very fast with respect to the sampling period of the measurements. Abrupt changes by no means refer only to changes of large magnitude; on the contrary, in most applications the main problem is to detect small changes. Moreover, in some applications, the early warning of small, and not necessarily fast, changes is of crucial interest in order to avoid the economic or even catastrophic consequences that can result from an accumulation of such small changes. For example, small faults arising in the sensors of a navigation system can result, through the underlying integration, in serious errors in the estimated position of the plane. Another example is the early warning of small deviations from the normal operating conditions of an industrial process. The early detection of slight changes in the state of the process makes it possible to plan inspection and repair periods more adequately, and thus to reduce operating costs.
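A standard example of such an algorithm is the CUSUM test for detecting a shift in a mean; here is a minimal one-sided sketch (the drift and threshold values are illustrative assumptions):

import numpy as np

def cusum(signal, target_mean, drift=0.5, threshold=5.0):
    """One-sided CUSUM: accumulate deviations above target_mean + drift and
    raise an alarm when the cumulative sum exceeds the threshold."""
    s = 0.0
    for t, x in enumerate(signal):
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return t                 # index of the first alarm
    return None

# Example: unit-variance noise whose mean jumps from 0 to 2 at t = 100.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
print("change detected at t =", cusum(x, target_mean=0.0))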

3,830 citations