
Showing papers in "The Computer Journal in 1965"


Journal ArticleDOI
TL;DR: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point.
Abstract: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point. The simplex adapts itself to the local landscape, and contracts on to the final minimum. The method is shown to be effective and computationally compact. A procedure is given for the estimation of the Hessian matrix in the neighbourhood of the minimum, needed in statistical estimation problems.

27,271 citations
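This is the Nelder-Mead simplex paper, and the algorithm it introduces ships in SciPy as the 'Nelder-Mead' minimizer. A minimal usage sketch (the Rosenbrock test function, starting point and tolerances are illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock's banana-shaped valley, a standard derivative-free test problem
def rosen(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

# The method compares function values at the n + 1 simplex vertices and
# replaces the worst vertex; no derivatives are ever evaluated.
result = minimize(rosen, x0=np.array([-1.2, 1.0]), method='Nelder-Mead',
                  options={'xatol': 1e-8, 'fatol': 1e-8})
print(result.x)  # converges near the minimum at (1, 1)
```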


Journal ArticleDOI
TL;DR: A new method for finding the maximum of a general non-linear function of several variables within a constrained region is described, and shown to be efficient compared with existing methods when the required optimum lies on one or more constraints.
Abstract: A new method for finding the maximum of a general non-linear function of several variables within a constrained region is described, and shown to be efficient compared with existing methods when the required optimum lies on one or more constraints. The efficacy of using effective constraints to eliminate variables is demonstrated, and a program to achieve this easily and automatically is described. Finally, the performance of the new method (the "Complex" method) with unconstrained problems is compared with those of the Simplex method, from which it was evolved, and Rosenbrock's method.

1,285 citations
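Standard libraries do not ship Box's Complex method, but its central move, over-reflecting the worst point through the centroid of the rest, is short to sketch. The following is a minimal illustration for box bounds only, with the function name, reflection factor and stopping rule chosen as simplifying assumptions; the full method also handles implicit constraints:

```python
import numpy as np

def complex_method(f, lo, hi, k=None, alpha=1.3, iters=500, seed=0):
    # Maximize f inside the box lo <= x <= hi with a "complex" of k points
    # (Box suggested roughly 2n points for n variables).
    rng = np.random.default_rng(seed)
    n = len(lo)
    k = k or 2 * n
    pts = rng.uniform(lo, hi, size=(k, n))       # feasible starting complex
    vals = np.array([f(p) for p in pts])
    for _ in range(iters):
        w = vals.argmin()                        # worst point of the complex
        centroid = (pts.sum(axis=0) - pts[w]) / (k - 1)
        trial = np.clip(centroid + alpha * (centroid - pts[w]), lo, hi)
        while f(trial) <= vals[w]:               # still worst: halve towards
            trial = (trial + centroid) / 2       # the centroid
            if np.allclose(trial, centroid):
                break
        pts[w], vals[w] = trial, f(trial)
    best = vals.argmax()
    return pts[best], vals[best]

# Illustrative use: maximize a concave function inside a box
x, fx = complex_method(lambda x: -(x[0] - 1)**2 - (x[1] + 2)**2,
                       lo=np.array([-5.0, -5.0]), hi=np.array([5.0, 5.0]))
```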


Journal ArticleDOI
TL;DR: An original method that has comparable convergence but, unlike the classical procedure, does not require any derivatives is described and discussed in this paper.
Abstract: The minimum of a sum of squares can often be found very efficiently by applying a generalization of the least squares method for solving overdetermined linear simultaneous equations. An original method that has comparable convergence but, unlike the classical procedure, does not require any derivatives is described and discussed in this paper. The number of times the individual terms of the sum of squares have to be calculated is approximately proportional to the number of variables. Finding a solution to a set of fifty non-linear equations in fifty unknowns required the left-hand sides of the equations to be worked out fewer than two hundred times.

690 citations
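The paper's own derivative-free least-squares procedure is not in standard libraries, but the problem it addresses is served today by scipy.optimize.least_squares, whose default '2-point' Jacobian is built from residual values alone, so the user supplies only the individual terms of the sum of squares. A sketch with an illustrative residual function:

```python
import numpy as np
from scipy.optimize import least_squares

# Residuals of a small nonlinear system; only these values are supplied,
# never their derivatives.
def residuals(x):
    return np.array([x[0]**2 + x[1] - 11.0,
                     x[0] + x[1]**2 - 7.0])

# The default jac='2-point' approximates the Jacobian by finite differences
# of the residuals, in the derivative-free spirit of the paper.
sol = least_squares(residuals, x0=np.array([1.0, 1.0]))
print(sol.x)  # a root of this (Himmelblau) system, near (3, 2)
```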



Journal ArticleDOI
TL;DR: The problem of minimizing a function f(x) of n variables x = (x1, x2, ..., xn) from a given approximation to the minimum x0 has received considerable attention in recent years, and two separate problems can be distinguished.
Abstract: The problem of minimizing a function f(x) of n variables x = (x1, x2, ..., xn) from a given approximation to the minimum x0 has received considerable attention in recent years. In particular, two separate problems can be distinguished: functions for which both the function f and the first derivatives or gradient ∂f/∂xi can be evaluated at any given point x, and functions for which only f can be evaluated. Although satisfactory methods have been given by Fletcher and Powell (1963), and by Fletcher and Reeves (1964), for solving the first of these problems, the situation with regard to the latter problem is less clear. Historically it was found that the simplest concepts, those of tabulation, random search, or that of improving each variable in turn, were hopelessly inefficient and often unreliable. Improved methods were soon devised, such as the Simplex method of Spendley, Hext and Himsworth (1962), the "pattern search" method of Hooke and Jeeves (1959), and a method due to Rosenbrock (1960). Both the latter methods have been widely used, that of Rosenbrock being probably the most efficient. However, all these methods rely on an ad hoc rather than a theoretical approach to the problem. Developments of gradient methods of minimization meanwhile were showing the value of iterative procedures based on properties of a quadratic function. In particular, the most efficient methods involved successive linear minimizations along so-called "conjugate directions" generated as the minimization proceeded. An explanation of these terms is given in Fletcher and Reeves (1964).

193 citations
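The conjugate-direction approach this introduction motivates underlies SciPy's 'Powell' minimizer, which performs successive linear minimizations without evaluating gradients. A brief usage sketch (the quartic test function and starting point are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Powell's quartic test function, minimized at the origin
def quartic(x):
    return ((x[0] + 10 * x[1])**2 + 5 * (x[2] - x[3])**2
            + (x[1] - 2 * x[2])**4 + 10 * (x[0] - x[3])**4)

# method='Powell': repeated line minimizations along a direction set that
# is updated towards mutual conjugacy; only f itself is evaluated.
res = minimize(quartic, x0=np.array([3.0, -1.0, 0.0, 1.0]), method='Powell')
print(res.x)  # approaches the minimum at the origin
```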


Journal ArticleDOI
TL;DR: This paper describes a variant of the generalized secant method for solving simultaneous nonlinear equations and it is shown that for suitable problems the method is considerably superior.
Abstract: This paper describes a variant of the generalized secant method for solving simultaneous nonlinear equations. The method is of particular value in cases where the evaluation of the residuals for imputed values of the unknowns is tedious, or a good approximation to the solution and the Jacobian at the solution are available. Experiments are described comparing the method with the Newton-Raphson process. It is shown that for suitable problems the method is considerably superior.

123 citations
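The generalized secant idea, keeping an approximate Jacobian that is updated from residual differences instead of being recomputed, survives in Broyden's methods, one of which SciPy exposes as root(method='broyden1'). A sketch under the assumption that a Broyden update is an acceptable stand-in for the paper's variant (the test system is illustrative):

```python
import numpy as np
from scipy.optimize import root

# A small nonlinear system; the secant approach pays off when each
# evaluation of these residuals is expensive.
def F(x):
    return np.array([np.cos(x[0]) - x[1],
                     x[0] - np.sin(x[1])])

# broyden1 applies rank-one secant updates to its Jacobian estimate
# rather than re-evaluating derivatives at every iteration.
sol = root(F, x0=np.array([0.5, 0.5]), method='broyden1')
print(sol.x, F(sol.x))
```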



Journal ArticleDOI
TL;DR: The significance of the computer in the university is discussed with three aspects of the impact which computers are having on what goes on inside a university, these three aspects being teaching, research and the computing service.
Abstract: In discussing the significance of the computer in the university I shall not be much concerned with the nature and characteristics of computers themselves, as these are fortunately beginning to be well understood. I shall be concerned with three aspects of the impact which computers are having on what goes on inside a university, these three aspects being teaching, research and the computing service.

Journal ArticleDOI
TL;DR: A detailed examination of binary search trees reveals that the probability of making precisely i comparisons in placing the (n − 1)th item in the tree is related to the (n − i)th symmetric function of the integers 1, ..., n.
Abstract: A detailed examination of binary search trees reveals that the probability of making precisely i comparisons in placing the (n − 1)th item in the tree is related to the (n − i)th symmetric function of the integers 1, ..., n. A recurrence relation for the moments of this distribution of comparisons is derived, and formulas for the mean number of comparisons and its variance are displayed. These are shown to be in accord with previously published values.
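The distribution studied here is easy to probe empirically. The sketch below is illustrative rather than a reproduction of the paper's analysis: it grows unbalanced binary search trees from random keys, counts the comparisons made by each placement, and checks the final insertion cost against the familiar 2 ln n growth of the mean:

```python
import random, math

def insertion_comparisons(keys):
    # Insert keys into an unbalanced BST (nodes are [key, left, right]),
    # recording the number of comparisons each placement makes.
    root, counts = None, []
    for k in keys:
        node, parent, c = root, None, 0
        while node is not None:
            parent, c = node, c + 1
            node = node[1] if k < node[0] else node[2]
        new = [k, None, None]
        if parent is None:
            root = new
        elif k < parent[0]:
            parent[1] = new
        else:
            parent[2] = new
        counts.append(c)
    return counts

random.seed(1)
n, trials = 200, 500
last = [insertion_comparisons(random.sample(range(10**6), n))[-1]
        for _ in range(trials)]
print(sum(last) / trials, 2 * math.log(n))  # empirical mean vs ~2 ln n
```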

Journal ArticleDOI
Alan J. Melbourne, John M. Pugmire
TL;DR: The nature of scientific work makes accurate forecasting of computer time difficult, and renting time on a large installation is not entirely satisfactory because of the tight job scheduling involved.
Abstract: (ii) The computer must be readily available. The nature of scientific work makes accurate forecasting of computer time difficult. For this reason, renting time on a large installation is not entirely satisfactory because of the tight job scheduling involved. A small computer locally installed seems preferable if provided with a compiler. Compilation, however, is a time-consuming process and may take longer than running the final compiled program. It should be reduced to a minimum.




Journal ArticleDOI
TL;DR: A method for the acceleration of the convergence of iterative procedures is described and applied to linear and non-linear problems and physical considerations based on Stokes' theorem are utilized to modify Southwell's relaxation technique.
Abstract: The numerical solution of vector field problems on digital computers is a slow process, especially when the characteristics of the regions investigated vary considerably. A method for the acceleration of the convergence of iterative procedures is described and applied to linear and non-linear problems. Physical considerations based on Stokes' theorem are utilized to modify Southwell's relaxation technique.
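Southwell's relaxation, which the abstract modifies, is the hand-computation ancestor of what is now coded as successive over-relaxation (SOR). The paper's own acceleration scheme is not reproduced here; the sketch below shows plain SOR for Laplace's equation on a square grid, with an illustrative relaxation factor:

```python
import numpy as np

def sor_laplace(u, omega=1.8, tol=1e-6, max_sweeps=10_000):
    # u holds boundary values on its edges; interior entries are unknowns.
    # omega > 1 over-relaxes each residual correction, accelerating
    # plain Gauss-Seidel iteration.
    for sweep in range(max_sweeps):
        worst = 0.0
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                r = 0.25 * (u[i-1, j] + u[i+1, j]
                            + u[i, j-1] + u[i, j+1]) - u[i, j]
                u[i, j] += omega * r
                worst = max(worst, abs(r))
        if worst < tol:
            return u, sweep
    return u, max_sweeps

grid = np.zeros((20, 20))
grid[0, :] = 1.0                   # fixed potential on the top edge
solution, sweeps = sor_laplace(grid)
```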

Journal ArticleDOI
TL;DR: Dantzig and Wolfe's Decomposition principle shows how to take advantage of the special structure of linear programming problems that can be considered as separate sub-problems with a relatively small number of linking equations.
Abstract: The problem concerned the production of oil from several different fields to meet a fixed overall target over a finite span of years. Typically we were concerned with seven different fields supplying three outlets (ports or refineries) over a span of twelve years. The problem was formulated as a linear program, the object being to meet the overall target and to maximize an expression representing the net profit over the time span being considered. It naturally decomposed into sub-problems because the linking equations between the various fields were quite few in number. Dantzig and Wolfe's Decomposition principle shows how to take advantage of the special structure of linear programming problems that can be considered as separate sub-problems with a relatively small number of linking equations. (Dantzig and Wolfe (1960), Dantzig and Wolfe (1961) and Dantzig (1963)). The linking equations are grouped together into what is known as the master problem, whilst each sub-problem contains the constraints and equations that can naturally be grouped together. In this problem the operations of each field under consideration were expressed in separate sub-problems. The constraints in the sub-problems deal with the construction of new production facilities in each of the years being considered. These facilities include new oil wells, and also plant such as gas/oil separators which are required to handle the oil once it has reached the surface. There are also equations representing the productive capacity of both existing and new wells, and constraints on the upper limits of the capacity of the field. A solution to a sub-problem is a way of operating a field, i.e. a set of annual productions with the corresponding investments required to make the productions possible. The master problem consists of the linking equations dealing with the supply of oil from the fields to the outlets. It also deals with the possibility of exploring for new oilfields. And, of course, it contains the main supply equations which say that the sum of productions from all the fields in any year must equal the overall target for that year.
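The block-angular structure the abstract describes is easy to see in miniature. All numbers below are invented, and the toy instance is solved monolithically with scipy.optimize.linprog; a genuine Dantzig-Wolfe implementation would instead iterate between a master problem over the linking row and independent per-field sub-problems:

```python
import numpy as np
from scipy.optimize import linprog

# Two fields, one year: each field's capacity is its own "sub-problem"
# block, and a single linking equation says production meets the target.
profit = np.array([3.0, 2.0])    # per-unit net profit, fields A and B
cap    = np.array([60.0, 80.0])  # field capacities (sub-problem blocks)
target = 100.0                   # overall production target (master row)

res = linprog(c=-profit,                         # maximize => negate
              A_eq=[[1.0, 1.0]], b_eq=[target],  # the linking equation
              bounds=list(zip([0.0, 0.0], cap)))
print(res.x, -res.fun)           # production plan and total profit
```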







Journal ArticleDOI
TL;DR: A one-step method for the numerical integration of the ordinary differential equation y'' = f(x)y + g(x), y(x0) = y0, y'(x0) = y0', based on the Gauss two-point rule is developed.
Abstract: The numerical integration of ordinary differential equations by the use of Gaussian quadrature methods was introduced into the literature by Hammer and Hollingsworth (1955); for subsequent developments, see Morrison and Stoller (1958), Korganoff (1958), Kuntzmann (1961), Henrici (1962). In this paper we develop a one-step method for the numerical integration of the ordinary differential equation y'' = f(x)y + g(x), y(x0) = y0, y'(x0) = y0', based on the Gauss two-point rule (see Hildebrand, 1956). Theoretical and computational comparison of the new method with other methods is given.
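The Gauss two-point rule is the two-stage Gauss-Legendre implicit Runge-Kutta method, and because the equation is linear in y the implicit stage equations reduce to one 4 x 4 linear solve per step. The sketch below applies that standard rule to the same equation; it is not the paper's own one-step formula, and the test problem is illustrative:

```python
import numpy as np

S3 = np.sqrt(3.0)
C = np.array([0.5 - S3 / 6, 0.5 + S3 / 6])   # Gauss two-point abscissae
A = np.array([[0.25, 0.25 - S3 / 6],
              [0.25 + S3 / 6, 0.25]])        # Butcher matrix
B = np.array([0.5, 0.5])                     # quadrature weights

def gauss2_step(f, g, x, Y, h):
    # One step for y'' = f(x) y + g(x) written as the first-order system
    # Y' = M(t) Y + b(t) with Y = (y, y'); linearity makes the implicit
    # stage equations exactly solvable.
    def M(t):
        return np.array([[0.0, 1.0], [f(t), 0.0]])
    def b(t):
        return np.array([0.0, g(t)])
    t1, t2 = x + C[0] * h, x + C[1] * h
    M1, M2, I = M(t1), M(t2), np.eye(2)
    block = np.block([[I - h * A[0, 0] * M1, -h * A[0, 1] * M1],
                      [-h * A[1, 0] * M2, I - h * A[1, 1] * M2]])
    rhs = np.concatenate([M1 @ Y + b(t1), M2 @ Y + b(t2)])
    K = np.linalg.solve(block, rhs).reshape(2, 2)
    return Y + h * (B[0] * K[0] + B[1] * K[1])

# Test: y'' = -y, y(0) = 0, y'(0) = 1, whose solution is sin x
Y, x, h = np.array([0.0, 1.0]), 0.0, 0.1
for _ in range(10):
    Y, x = gauss2_step(lambda t: -1.0, lambda t: 0.0, x, Y, h), x + h
print(Y[0], np.sin(1.0))  # fourth-order accuracy: the two agree closely
```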