
Showing papers on "Heuristic (computer science) published in 1976"


01 Feb 1976
TL;DR: An O(n³) heuristic algorithm is described for solving n-city travelling salesman problems (TSP) whose cost matrix satisfies the triangularity condition, and a worst-case analysis of this heuristic shows that the ratio of the answer obtained to the optimum TSP solution is strictly less than 3/2.
Abstract: An O(n³) heuristic algorithm is described for solving n-city travelling salesman problems (TSP) whose cost matrix satisfies the triangularity condition. The algorithm involves as substeps the computation of a shortest spanning tree of the graph G defining the TSP, and the finding of a minimum cost perfect matching of a certain induced subgraph of G. A worst-case analysis of this heuristic shows that the ratio of the answer obtained to the optimum TSP solution is strictly less than 3/2. This represents a 50% reduction from the value 2, which was the previously best known such ratio for the performance of other polynomial-growth algorithms for the TSP.

1,346 citations


Proceedings ArticleDOI
25 Oct 1976
TL;DR: Several polynomial time approximation algorithms for some NP-complete routing problems are presented, and the worst-case ratios of the cost of the obtained route to that of an optimal are determined.
Abstract: Several polynomial time approximation algorithms for some NP-complete routing problems are presented, and the worst-case ratios of the cost of the obtained route to that of an optimal are determined. A mixed-strategy heuristic with a bound of 9/5 is presented for the Stacker-Crane problem (a modified Traveling Salesman problem). A tour-splitting heuristic is given for k-person variants of the Traveling Salesman problem, the Chinese Postman problem, and the Stacker-Crane problem, for which a minimax solution is sought. This heuristic has a bound of e + 1 - 1/k, where e is the bound for the corresponding 1-person algorithm.
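The tour-splitting idea for the k-person variants can be illustrated as below: a 1-person tour rooted at a depot is cut into k segments at roughly equal cost intervals, each segment becoming one person's route. The cutting rule here is a simplification of the paper's, and all names are ours.

```python
def split_tour(tour, dist, k):
    """Split a depot-rooted tour (tour[0] == 0 is the depot) into k routes,
    cutting whenever the running segment cost reaches total/k."""
    n = len(tour)
    total = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    target = total / k
    routes, seg, seg_cost, prev = [], [], 0.0, 0
    for c in tour[1:]:
        seg_cost += dist[prev][c]
        seg.append(c)
        prev = c
        if seg_cost >= target and len(routes) < k - 1:
            routes.append([0] + seg + [0])   # each route returns to the depot
            seg, seg_cost, prev = [], 0.0, 0
    routes.append([0] + seg + [0])
    return routes
```

The quality of the k routes inherits the quality of the 1-person tour, which is the intuition behind the e + 1 - 1/k bound quoted above.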

315 citations


Journal ArticleDOI
01 Mar 1976
TL;DR: Numerical results for a variety of network configurations indicate that the heuristic algorithm, while not theoretically convergent, yields practicable low cost solutions with substantial savings in computer processing time and storage requirements.
Abstract: The problems of file allocation and capacity assignment in a fixed topology distributed computer network are examined. These two aspects of the design are tightly coupled by means of an average message delay constraint. The objective is to allocate copies of information files to network nodes and capacities to network links so that a minimum cost is achieved subject to network delay and file availability constraints. A model for solving the problem is formulated and the resulting optimization problem is shown to fall into a class of nonlinear integer programming problems. Deterministic techniques for solving this class of problems are computationally cumbersome, even for small size problems. A new heuristic algorithm is developed, which is based on a decomposition technique that greatly reduces the computational complexity of the problem. Numerical results for a variety of network configurations indicate that the heuristic algorithm, while not theoretically convergent, yields practicable low cost solutions with substantial savings in computer processing time and storage requirements. Moreover, it is shown that this algorithm is capable of solving realistic network problems whose solutions using deterministic techniques are computationally intractable.

203 citations


Journal ArticleDOI
Warren E. Walker
TL;DR: The algorithm is being used by the U.S. Environmental Protection Agency's Office of Solid Waste Management Programs to decide on the number, type, size, and location of the disposal facilities to operate in a region, and how to allocate the region's wastes to these facilities.
Abstract: An algorithm with three variations is presented for the approximate solution of fixed charge problems. Computational experience shows it to be extremely fast and to yield very good solutions. The basic approach is (1) to obtain a local optimum by using the simplex method with a modification of the rule for selection of the variable to enter the basic solution, and (2) once at a local optimum, to search for a better extreme point by jumping over adjacent extreme points to resume iterating two or three extreme points away. This basic approach is the same as that used by Steinberg [Steinberg, D. I. 1970. The fixed charge problem. Naval Res. Log. Quart. 17 217--236.], Cooper [Cooper, L. 1975. The fixed charge problem---I: A new heuristic method. Comp. & Maths. with Appls. 1 89--95.], and Denzler [Denzler, D. R. 1969. An approximate algorithm for the fixed charge problem. Naval Res. Log. Quart. 16 411--416.] in their algorithms, but is an extension and improvement of all three. The algorithm is being used by the U.S. Environmental Protection Agency's Office of Solid Waste Management Programs to decide on the number, type, size, and location of the disposal facilities to operate in a region, and how to allocate the region's wastes to these facilities.

123 citations


Journal ArticleDOI
TL;DR: For the majority of the techniques studied, much further work remains to be done before any practical applications can be foreseen; some methods, however, constitute steps in the right direction.
Abstract: The application of microprogramming in present day computers is rapidly increasing, and microprogramming will undoubtedly play a major role in the next generation of computer systems. Microprogram optimization is one way to increase efficiency and can be crucial in some applications. Optimization, in this context, refers to a reduction/minimization of the control store and/or execution time of microprograms. The numerous strategies are classified under four broad categories: word dimension reduction, bit dimension reduction, state reduction, and heuristic reduction. The various techniques are presented, analyzed, and compared. Unfortunately, the results of the survey are not too positive. The reason is that much of the work on optimization has been devoted to obtaining the absolute minimum solutions rather than "good engineering reductions." Whether the reduction is performed with respect to the word dimension, the bit dimension, or the number of states, existing techniques for obtaining the optimum solution use exhaustive enumeration. Thus, the effort involved is prohibitive and there are no guarantees that significant reductions can be obtained. It is thus doubtful that an optimum solution can be justified even when the microcode produced is frequently executed. Heuristic reduction techniques do not guarantee an optimum solution but can provide some reduction with little effort. For the majority of the techniques studied, much further work remains to be done before any practical applications can be foreseen. Some methods, however, constitute steps in the right direction. Directions for future research are briefly outlined in the conclusions.

97 citations


Journal ArticleDOI
TL;DR: In this article, alternative iterative algorithms (conjugate gradient, nonstationary Richardson, semi-iterative) are proposed for quadratic optimization, and their performance in image reconstruction is compared with that of previously used methods.

73 citations




Journal ArticleDOI
01 Jun 1976
TL;DR: A morphological classification of heuristic techniques is presented to serve as a step towards a design methodology of heuristics, and should be a tool for the development of new and efficient heuristic techniques.
Abstract: A morphological classification of heuristic techniques is presented. It shall serve as a step towards a design methodology of heuristic techniques. It should, therefore, be a tool for the development of new and efficient heuristic techniques.

24 citations


Journal ArticleDOI
TL;DR: Sometimes intuitively reasonable policy refinements fail, so efficient policy selection must be based on experiments of the kind presented here, which test and discuss a spectrum of policies that involve “random” selection of function values for tabulation.
Abstract: This case study concerns the use of a table of selected function values to avoid repeated function evaluations and, particularly, to speed up recursive ones. It is assumed that the table cannot hold all repeatedly needed function values owing to storage limitations (“small-table technique”). The programmer is then faced with the problem of finding a table management policy that reduces repeat evaluations to a minimum, a problem which must usually be tackled by heuristic means. We test and discuss a spectrum of policies, most of which involve “random” selection of function values for tabulation. Savings of 95% or more are easily achieved, apparently even in the limit as the computational burden increases. Policies involving table search proper are noted to be inferior. Sometimes intuitively reasonable policy refinements fail, so efficient policy selection must be based on experiments of the kind presented here. Several other programming recommendations are made, as well as suggestions for theoretical research.
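The small-table technique described above can be sketched as a bounded memo table with one of the "random" replacement policies: when the table is full, a randomly chosen entry is evicted. The table size, seed, and example recurrence are arbitrary choices of ours, not the paper's.

```python
import random

class SmallTable:
    """A fixed-capacity table of function values with random replacement."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.table = {}
        self.rng = random.Random(seed)
        self.hits = self.misses = 0

    def lookup(self, key):
        if key in self.table:
            self.hits += 1
            return self.table[key]
        self.misses += 1
        return None

    def store(self, key, value):
        if len(self.table) >= self.capacity:
            victim = self.rng.choice(list(self.table))  # random replacement policy
            del self.table[victim]
        self.table[key] = value

TABLE = SmallTable(capacity=32)

def f(n):
    """A fib-like recurrence; without tabulation it takes exponential time."""
    if n < 2:
        return n
    cached = TABLE.lookup(n)
    if cached is not None:
        return cached
    value = f(n - 1) + f(n - 2)
    TABLE.store(n, value)
    return value
```

The hit/miss counters make it easy to run the kind of policy-comparison experiment the study advocates: vary the capacity and the eviction rule and measure the fraction of repeat evaluations avoided.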

20 citations


Journal ArticleDOI
TL;DR: Efficient tests are given to determine whether all greedy solutions are optimal with respect to a given set of knapsack objects or coin types.
Abstract: A natural, and readily computable, first guess at a solution to the coin changing problem is the canonical solution. This solution is a special case of the greedy solution which is a reasonable heuristic guess for the knapsack problem. In this paper, efficient tests are given to determine whether all greedy solutions are optimal with respect to a given set of knapsack objects or coin types. These results improve or extend previous tests given in the literature. Both the incomplete and complete cases are considered.
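To make the distinction concrete, the sketch below contrasts the greedy (canonical) solution with a dynamic-programming optimum; the DP check is a brute-force stand-in for the paper's efficient tests, used here only to exhibit a coin system where the greedy solution is not optimal.

```python
def greedy_change(coins, amount):
    """Largest-coin-first change making; coins must include 1 so change always exists."""
    counts = {}
    for c in sorted(coins, reverse=True):
        counts[c], amount = divmod(amount, c)
    return counts

def optimal_count(coins, amount):
    """DP over amounts: fewest coins summing to `amount`."""
    best = [0] + [None] * amount
    for a in range(1, amount + 1):
        best[a] = 1 + min(best[a - c] for c in coins if c <= a)
    return best[amount]
```

For the coin types {1, 5, 10, 25} the greedy solution is always optimal; for {1, 3, 4} it is not (greedy makes 6 as 4+1+1, three coins, while 3+3 uses two), which is exactly the kind of case the paper's tests are designed to detect without brute force.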

Journal ArticleDOI
TL;DR: In this article, a heuristic self-organization method for constructing a nonlinear river flow prediction model from the available data such as river flow and areal mean precipitation is presented.
Abstract: A heuristic self-organization method for constructing a nonlinear river flow prediction model from available data, such as river flow and areal mean precipitation, is presented. Our algorithm, an improved version of the GMDH proposed by A. G. Ivakhnenko, is useful for the prediction of complex nonlinear systems with a large number of variables and a small amount of available input-output data. The efficiency and usefulness of the proposed sequential prediction algorithm are shown by the use of a simulation model. This algorithm is applied to the flow prediction of the Karasu River in Japan. Numerical comparisons are performed between the prediction model obtained by "sequential GMDH" and elaborate hydrologic methods, and we show that the newly introduced prediction algorithm offers improvements for real-time computation.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the General Routing Problem approach to solving large scale routing problems and propose a heuristic to produce optimum and near optimum solutions quickly. But, the heuristics are not suitable for large-scale networks.
Abstract: This paper discusses the General Routing Problem approach to solving large scale routing problems. The General Routing Problem on a network G = (N, A) requires finding the minimum cost cycle that visits every node in a subset Q ⊆ N and that traverses every arc in a subset R ⊆ A. Utilizing special problem characteristics and the structure of real transportation networks, large reductions in effective problem size and complexity can often be made. This permits a very effective heuristic to produce optimum and near optimum solutions quickly.

Journal ArticleDOI
TL;DR: The technique known as the trainable heuristic procedure uses algorithmic procedures to gain experience on heuristic approaches to combinatorial problems, and the synergistic combination of these approaches leads to a learning mechanism which has proven very effective in the solution of the assembly line balancing problem.
Abstract: This paper examines the development of a solution procedure for many types of combinatorial problems. The technique known as the trainable heuristic procedure uses algorithmic procedures to gain experience on heuristic approaches to combinatorial problems. The synergistic combination of these approaches leads to a learning mechanism which has proven very effective in the solution of the assembly line balancing problem.

Journal ArticleDOI
C. S. Parker
TL;DR: A construction approach due to Graves and Whinston produced the best results, both when used to generate starting solutions for the improvement methods and when evaluated on its own merit against the improvement methods using other starts.
Abstract: The purpose of this study is to examine the relative efficacy of several promising heuristic approaches to a classic problem of component placement. Four "construction" and nine "improvement" algorithms were chosen for investigation and compared experimentally on a CDC 6400 computer. The improvement methods were selected to test some basic strategies of pairwise interchanging of components, and the construction procedures were chosen primarily to evaluate the effects of the quality of the starting solution on the improvement methods. The algorithms were tested on 75 problems generated from the literature and compared with respect to solution quality and CPU run-time requirements. A construction approach due to Graves and Whinston produced the best results, both when used to generate starting solutions for the improvement methods and when evaluated on its own merit against the improvement methods using other starts. Construction approaches had previously been regarded as relatively inferior techniques.

Journal ArticleDOI
TL;DR: This paper presents a simpler proof for a result of Magazine, Nemhauser, and Trotter, which states recursive necessary and sufficient conditions for the optimality of a heuristic solution for a class of knapsack problems.
Abstract: This paper presents a simpler proof for a result of Magazine, Nemhauser, and Trotter, which states recursive necessary and sufficient conditions for the optimality of a heuristic solution for a class of knapsack problems.

Journal ArticleDOI
01 Mar 1976
TL;DR: An algorithm is given to find the set of all inexact prime implicants of an inexact switching function, as defined with the help of fuzzy algebra, that is suitable for an efficient computer implementation.
Abstract: In this correspondence, we are concerned with an algorithm used to find the set of all inexact prime implicants of an inexact switching function as defined with the help of fuzzy algebra. The principal advantage claimed over existing methods is that it is suitable for an efficient computer implementation. The steps are easy to apply and well adapted to being programmed without any heuristic methods.


Journal ArticleDOI
TL;DR: The design and the successful implementation of a flexible system model are described, along with the subsequent use of the model to control a check processing operation.
Abstract: Check processing in large commercial banks offers many opportunities for savings due to the great operational costs and the time value of money. However, savings can be realized only through the control of the system. Since the check processing operation is functionally interdependent with other departments, often with conflicting objectives, changes can be difficult to make. This paper contains a description of the design and the successful implementation of a flexible system model, and the subsequent use of the model to control a check processing operation. The system model employs a multiple salesman traveling salesman algorithm, a bottleneck assignment algorithm, a dynamic programming algorithm, regression models, various heuristic routines, and report generating programs, all interrelated to generate coherent solutions. The model is adaptive; as the system or environment changes, so does the solution. The model is able to generate solutions which take into account the balancing of messenger crew work loads, check volumes, transit dollars, minimum distance routines, work flow smoothing, banking hours, expected branch personnel quit times, and the restrictions placed by the precise timing requirements of the delivery of payrolls, lottery tickets, drafts, and return items. An application of the system model has saved in the neighborhood of 1 million dollars in labor and transportation costs alone. A less tangible, though nonetheless important, benefit has been the establishment of good control of the entire process.

Journal ArticleDOI
TL;DR: This paper presents a well-posed optimality problem which defines the necessary performance of locomotion-control systems in environments showing a non-uniform distribution of survival-effective factors; the optimization algorithm involved is the "local Bremermann algorithm".

Book ChapterDOI
01 Jan 1976
TL;DR: This chapter discusses an optimization problem arising from tearing methods and presents a graph theoretic interpretation of the problem, based on a bipartite graph, which suggests that some optimal reduction rules can be successfully implemented.
Abstract: This chapter discusses an optimization problem arising from tearing methods. It presents a graph theoretic interpretation of the problem, based on a bipartite graph. Because of the particular structure of A, a large, sparse, nonsingular, non-symmetric matrix, it is sometimes possible to save computation time and/or storage by implementing tearing methods. Tearing consists mainly of two parts: first, the solution of the system A*x = b is computed, where A* has been obtained from A by zeroing some elements; then, this solution is modified to take into account the real structure of the original system. This method may be necessary, even though not convenient, when it is not possible to process the original system Ax = b because of its dimension with respect to the capacity of the available computer. Further work is needed to determine how far the solution given by the heuristic algorithm is from the optimum one. It is important to devise new heuristic procedures such that more flexibility is allowed by introducing the possibility of limited backtracking. The parallelism between the nonsymmetric permutation problem and the symmetric permutation one suggests that some optimal reduction rules can be successfully implemented.

01 Jan 1976
TL;DR: The fundamental nature of the map-matching problem is examined and theoretical justification for using various comparison metrics is investigated, and heuristic arguments are developed that support the use of the Product algorithm and the MAD algorithm when S/N is low and high, respectively.
Abstract: The fundamental nature of the map-matching problem is examined and theoretical justification for using various comparison metrics is investigated. Since the problem is one of statistical decision theory, the optimum solution is to compute the likelihood ratio for each comparison and choose the match point at a place where the likelihood ratio is maximum. That requires a knowledge of N-dimensional joint probability distributions; hence, we resort to approximations that maximize or minimize several functions called "metrics". By considering two-picture-element scenes, the features of various metrics are explained and compared with the likelihood ratio. In this way heuristic arguments are developed that support the use of the Product algorithm (a sum of products that is related to classical correlation) when S/N is low, and the MAD algorithm (mean absolute difference) when S/N is high.

Journal ArticleDOI
TL;DR: A generalized system for language analysis is described.
Abstract: A generalized system for language analysis is described.

Proceedings ArticleDOI
28 Jun 1976
TL;DR: A general approach is presented to finding optimal arrangements of objects, given a cost function for evaluating an arrangement, based on the assumption that features which are common to many weak local optima of a problem should be present in the global optimum.
Abstract: We present a general approach to finding optimal arrangements of objects, given a cost function for evaluating an arrangement. The method is based on the assumption that features which are common to many weak local optima of a problem should be present in the global optimum. The algorithm identifies such common features and uses them to create "blocks" of objects which are treated as indivisible units. We have used general-purpose algorithms which do not exploit the peculiarities of any one problem. Thus, the method described here may not be as good as a heuristic which has been tailored to a particular problem. However, it is easily adaptable to different problems, and produces many near-optimal solutions. Two examples are discussed: an electrical-net wiring problem and the traveling salesman problem.
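The common-feature idea can be illustrated on the traveling salesman example: run a 2-opt local search from several random starts, then keep the edges shared by every local optimum as candidate "blocks". This is our reconstruction under stated assumptions, not the authors' implementation; the instance, restart count, and seed are arbitrary.

```python
import math
import random

def two_opt(tour, dist):
    """Standard 2-opt local search: reverse segments while that reduces cost."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[(i + 1) % n]
                c, d = tour[j], tour[(j + 1) % n]
                if a == d:  # the two edges share a vertex; skip
                    continue
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d] - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

def tour_edges(tour):
    """The tour's edges as a set of unordered vertex pairs."""
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def common_blocks(dist, restarts=10, seed=1):
    """Local optima from random starts, plus the edges they all share."""
    rng = random.Random(seed)
    n = len(dist)
    optima = []
    for _ in range(restarts):
        t = list(range(n))
        rng.shuffle(t)
        optima.append(two_opt(t, dist))
    shared = set.intersection(*(tour_edges(t) for t in optima))
    return optima, shared
```

In the full method the shared edges would be frozen into indivisible blocks and the search repeated on the reduced problem; the sketch stops at the feature-extraction step.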

Journal ArticleDOI
TL;DR: In this article, a multidimensional-polynomial type of regression analysis with a least-squares criterion is used to fit a general cubic model of a multicomponent, interactive growth system to observed data.

Book ChapterDOI
01 Jan 1976
TL;DR: On the one hand, SD-models serve the purpose of acquiring some information about the future behaviour of the problems described in the model; on the other hand, the SD-model should show how its behaviour can be influenced according to well-defined criteria.
Abstract: On the one hand, SD-models serve the purpose of acquiring some information for the future about the behaviour of problems described in the model; on the other hand, the SD-model should show possibilities as to how its model behaviour can be influenced according to well-defined criteria. The latter demand is realized by the integration of the simulation model and a superior program structure which is represented by a modified feedback loop.