
Showing papers on "Heuristic published in 1979"


Book
31 Mar 1979
TL;DR: This book is the first in the MIT Press Series in Signal Processing, Optimization, and Control and will be of interest to transportation systems analysts, urban and regional planners, industrial, communication and systems engineers, and academicians and professionals in management, geography, engineering, operations research, and applied mathematics.
Abstract: Many kinds of organizations--large and small, public and private--are faced with decisions on where to locate their facilities so that they can provide optimal utility both to those who benefit from them and to the organization itself. What are the best available sites for locating--to give just a few examples--a system of fire stations, health outreach clinics, parking garages, shopping malls, radar stations, computer centers, elementary schools, police patrol cars, switching centers, warehouses? The (spatial) location of such facilities usually takes place in the context of a given transportation, communication, or transmission system, that for analytic purposes may be represented as a network of nodes (population centers, terminals, manufacturing sites) and links (highways, railways, telephone cables) connecting pairs of nodes. This study is concerned with the analytical aspects of facility location in systems where such an underlying network structure exists. While numerous texts have been published both on network analysis and on location theory (based on Euclidean or rectangular measures, as distinct from network distance measures defined in terms of such variables as maximum response time), this text is the first book-length treatment of the intersection of the two topics. The book serves both to summarize the present range of this approach and to advance it measurably. The authors write that "The orientation of this book is toward algorithmic solutions to network location problems [which are] often formulated as discrete optimization problems. While our primary concern is with practical optimum-seeking methods, we also discuss some heuristic approaches as well as results concerning upper bounds on the computational complexity of these problems...." The prerequisites for this text would be satisfied by an introductory course in operations research with primary emphasis on optimization techniques including linear programming and complex analysis.
Prior knowledge of network analysis (graph theory) is useful though not essential. (An appendix is devoted to a brief introduction to the relevant concepts of graph theory.) A basic understanding of probability theory is also needed for some of the material. We have tried to keep the level of mathematics to a minimum beyond these prerequisites. We hope that the text will be of interest to transportation systems analysts, urban and regional planners, industrial, communication, and systems engineers, and academicians and professionals in management, geography, engineering, operations research, and applied mathematics." This book is the first in the MIT Press Series in Signal Processing, Optimization, and Control, edited by Alan S. Willsky.

273 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the situation of a deterministic demand pattern having a linear trend and select the timing and sizes of replenishments so as to keep the total of replenishment and carrying costs as low as possible.
Abstract: We consider the situation of a deterministic demand pattern having a linear trend. The problem is to select the timing and sizes of replenishments so as to keep the total of replenishment and carrying costs as low as possible. An earlier developed heuristic for the general case of a deterministic, time-varying, demand pattern is specialized to the case of a linear trend. The simple decision rule is shown to lead to small cost penalties in two examples that have been exactly analyzed in an earlier article in this journal.
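The abstract does not reproduce the decision rule itself. As an illustration only, here is a minimal sketch of the general idea behind lot-sizing heuristics of this family (often associated with Silver and Meal): extend the current replenishment period by period, and stop as soon as the average cost per period covered starts to rise. The parameter names `K` (setup cost) and `h` (holding cost per unit per period) are assumptions for the sketch, not values from the paper.

```python
def silver_meal(demand, K, h):
    """Silver-Meal style lot-sizing sketch.

    demand: list of per-period demands
    K: fixed cost per replenishment
    h: holding cost per unit per period
    Returns a list of order quantities (0 = no order placed that period).
    """
    n = len(demand)
    orders = [0] * n
    t = 0
    while t < n:
        best_avg = None
        qty = 0
        periods = 0
        hold = 0.0
        for j in range(t, n):
            periods = j - t + 1
            hold += h * (j - t) * demand[j]   # carry demand[j] for (j - t) periods
            avg = (K + hold) / periods        # average cost per period covered
            if best_avg is not None and avg > best_avg:
                periods = j - t               # cost per period started rising: stop
                break
            best_avg = avg
            qty += demand[j]
        orders[t] = qty
        t += periods
    return orders
```

For example, `silver_meal([50, 50, 50], 100, 1)` covers the first two periods with one order of 100 and the third with an order of 50.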

231 citations


Journal ArticleDOI
TL;DR: The combinatorial problem of clusterwise discrete linear approximation is defined as finding a given number of clusters of observations such that the overall sum of error sum of squares within those clusters becomes a minimum.
Abstract: The combinatorial problem of clusterwise discrete linear approximation is defined as finding a given number of clusters of observations such that the overall sum of error sum of squares within those clusters becomes a minimum. The FORTRAN implementation of a heuristic solution method and a numerical example are given.
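The FORTRAN implementation is given in the paper itself; as an illustration of the same idea, the following is a sketch in Python of an exchange-type heuristic: alternately fit a least-squares line to each cluster and reassign every observation to the cluster whose fit approximates it best. The random initialization and stopping details here are assumptions, not the published method.

```python
import numpy as np

def clusterwise_regression(X, y, k, iters=20, seed=0):
    """Exchange-style heuristic for clusterwise linear approximation:
    alternately fit a least-squares line per cluster and reassign each
    observation to the cluster whose fit gives the smallest squared error."""
    rng = np.random.default_rng(seed)
    n = len(y)
    A = np.column_stack([X, np.ones(n)])     # design matrix with intercept
    labels = rng.integers(0, k, size=n)
    coefs = []
    for _ in range(iters):
        coefs = []
        for c in range(k):
            mask = labels == c
            if mask.sum() < A.shape[1]:      # too few points: fall back to a zero fit
                coefs.append(np.zeros(A.shape[1]))
                continue
            beta, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
            coefs.append(beta)
        resid = np.stack([(y - A @ b) ** 2 for b in coefs])   # k x n squared errors
        new_labels = resid.argmin(axis=0)
        if np.array_equal(new_labels, labels):
            break                            # assignment stable: converged
        labels = new_labels
    return labels, coefs
```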

203 citations


Journal ArticleDOI
TL;DR: In this article, the authors considered the planning of individual machine groups or work centers producing many different products such as components, subassemblies or assemblies and proposed an efficient heuristic which is an extension of the Eisenhut heuristic.
Abstract: This paper considers the planning of individual machine groups or work centers producing many different products such as components, subassemblies or assemblies. The deterministic multi-item lot size problem with limited capacity has attracted much attention during the past two decades, but no efficient optimization techniques have been available to date. We therefore suggest an efficient heuristic which is an extension of the Eisenhut heuristic. The resulting production programs always adhere to the characteristics of the dominant schedules. Special attention is given to the characterization of these dominant schedules.

160 citations


Proceedings ArticleDOI
30 May 1979
TL;DR: The goal of this work is to design mechanisms that can automatically select a near-optimal attribute partition of a file's attributes, based on the usage pattern of the file and on the characteristics of the data in the file, through a heuristic search technique.
Abstract: One technique that is sometimes employed to enhance the performance of a database management system is known as attribute partitioning. This is the process of dividing the attributes of a file into separately stored subfiles. By storing together those attributes that are frequently requested together by transactions, and by separating those that are not, attribute partitioning can reduce the number of pages that are transferred from secondary storage to primary memory in the processing of a transaction. The goal of this work is to design mechanisms that can automatically select a near-optimal attribute partition of a file's attributes, based on the usage pattern of the file and on the characteristics of the data in the file. The approach taken to this problem is based on the use of an accurate partition evaluator and of a heuristic that guides a search through the large space of possible partitions. The heuristic proposes a small set of promising partitions to submit for detailed analysis. The evaluator assigns a figure of merit to any proposed partition that reflects the cost that would be incurred in processing the transactions in the usage pattern if the file were partitioned in the proposed way. We have implemented an evaluator for a particular model database system and have developed a heuristic search technique. A series of experiments has demonstrated the accuracy and efficiency of this heuristic.
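The paper's evaluator models page transfers in a particular database system, and its search technique is not detailed in the abstract. The sketch below uses a toy cost model (each query pays a unit access overhead plus the width of every subfile it touches, weighted by frequency) and a plain greedy merge search; both the cost model and the query format are assumptions for illustration.

```python
def eval_cost(partition, queries):
    """Toy partition evaluator (an assumed cost model, not the paper's):
    each query pays, per subfile it touches, a unit access overhead plus
    the subfile's width, weighted by the query's frequency."""
    return sum(freq * sum(1 + len(s) for s in partition if s & attrs)
               for attrs, freq in queries)

def greedy_partition(attributes, queries):
    """Greedy merge search: start with one attribute per subfile and keep
    applying whichever pairwise merge reduces the evaluated cost most."""
    partition = [frozenset([a]) for a in attributes]
    while True:
        base = eval_cost(partition, queries)
        best = None
        for i in range(len(partition)):
            for j in range(i + 1, len(partition)):
                trial = [s for k, s in enumerate(partition) if k not in (i, j)]
                trial.append(partition[i] | partition[j])
                c = eval_cost(trial, queries)
                if c < base and (best is None or c < best[0]):
                    best = (c, trial)
        if best is None:          # no merge improves the cost: stop
            return partition
        partition = best[1]
```

With queries `{a, b}` (frequency 10) and `{c}` (frequency 5), the greedy search groups `a` and `b` together and leaves `c` in its own subfile.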

141 citations


Book ChapterDOI
TL;DR: This chapter presents the results of a small-scale research study conducted at Berkeley in 1977–1978 to explore students' mastery of five heuristic strategies, describing the problem-solving processes of seven upper-division college students working a series of problems that can be solved by the application of one or more heuristic strategies.
Abstract: This chapter presents the results of a small-scale research study conducted at Berkeley in 1977–1978 to explore students' mastery of five heuristic strategies. It describes the problem-solving processes of seven upper-division college students working a series of problems that can be solved by the application of one or more heuristic strategies. In the study, two small groups of students were given practice on and shown the solutions to a set of problems that could be solved by heuristic methods. The amount of time working the problems was identical for the two groups and the solutions they were shown were nearly identical. Pretests and post-tests were given individually and the students were recorded as they worked the problems out loud. Two comparisons of pretest-to-post-test gains, with regard to two different scoring procedures, indicated that the experimental group significantly outperformed the control group. More important, however, are the results provided by the protocol data. Explicit heuristic instruction makes or can make a difference with regard to problem-solving performance.

119 citations


Journal ArticleDOI
TL;DR: In this paper, an analytic cost expression for processing conjunctive, disjunctive, and batched queries is developed and an effective heuristic for minimizing query processing costs is presented.
Abstract: A transposed file is a collection of nonsequential files called subfiles. Each subfile contains selected attribute data for all records. It is shown that transposed file performance can be enhanced by using a proper strategy to process queries. Analytic cost expressions for processing conjunctive, disjunctive, and batched queries are developed and an effective heuristic for minimizing query processing costs is presented. Formulations of the problem of optimally processing queries for a particular family of transposed files are shown to be NP-complete. Query processing performance comparisons of multilist, inverted, and nonsequential files with transposed files are also considered.

114 citations


Journal ArticleDOI
TL;DR: A heuristic algorithm employing dynamic programming is presented for solving the two-dimensional cutting stock problem where all the small rectangles are of the same dimensions, but without the usual restriction that the cutting be done with “guillotine” cuts.
Abstract: A heuristic algorithm employing dynamic programming is presented for solving the two-dimensional cutting stock problem where all the small rectangles are of the same dimensions, but without the usual restriction that the cutting be done with “guillotine” cuts, i.e., cuts which must be made in stages from one edge to the opposite edge of the large rectangle being cut. The objective of the algorithm is to determine a cutting or layout pattern for which the ratio of the unused area to the total area of the large rectangle tends to be small. To demonstrate the method, the common problem of establishing standardized loading patterns for rectangular items on pallets is examined in detail. The algorithm is described with a minimum of mathematics through the use of several pictorial displays and a simple example. The efficiency of the heuristic is then evaluated by comparing computer generated loading patterns for 182 different size items to the loading patterns recommended by the U.S. Navy, and shown to be 10.4 percent more efficient for 64 out of 182 cases when the number of items per layer was not identical. The algorithm is also shown to be an effective aid to management both in establishing standardized loading patterns and procedures, and in communicating these loading standards to production personnel via computer generated “shop paper.” This type of computer design flexibility and control is a valuable management tool not only for standardizing pallet arrangements, but for carton design and consolidation, warehouse design and layout, bin and shelf stocking, designing tapes for numerically controlled gas cutting machines, and numerous other industrial problems involved with the efficient layout of rectangular objects.

96 citations


Journal ArticleDOI
TL;DR: The study reported in this paper focuses on a heuristic which can handle reasonably large problems, and yet can be simply and economically implemented.
Abstract: In this paper we consider the problem of scheduling “n” independent jobs on “m” parallel processors. Each job consists of a single operation with a specific processing time and due date. The processors are identical and the operation of the system is non-preemptive. The objective is to schedule the jobs in such a way that the total tardiness of the n jobs is as small as possible. For the case of a single processor with n jobs, there exist algorithms which provide optimal solutions. On the other hand currently available optimal scheduling algorithms for multiple processors can handle only small problems. Therefore, practitioners are forced to use heuristic methods to schedule their jobs on multiple processors. This raises questions of the following nature: “Are we scheduling our jobs reasonably well? Are there other schedules with which our total tardiness can be lowered substantially? How far off might the heuristic solution be, from the optimal solution?” The study reported in this paper focuses on a heuristic which can handle reasonably large problems, and yet can be simply and economically implemented.
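The abstract leaves the heuristic itself unspecified. A common, simple baseline for this problem, shown here as an assumed example rather than the paper's method, is earliest-due-date list scheduling: sort the jobs by due date and always assign the next job to the machine that becomes free first.

```python
import heapq

def edd_parallel_tardiness(jobs, m):
    """List-scheduling sketch: jobs = [(processing_time, due_date), ...].
    Sort by due date (EDD) and give each job to the earliest-free machine.
    Returns the total tardiness of the resulting schedule."""
    free = [0.0] * m                 # heap of machine-free times
    heapq.heapify(free)
    total = 0.0
    for p, d in sorted(jobs, key=lambda j: j[1]):   # EDD order
        start = heapq.heappop(free)  # earliest-free machine
        finish = start + p
        total += max(0.0, finish - d)
        heapq.heappush(free, finish)
    return total
```

For example, with two machines and jobs (2, 2), (2, 2), (3, 3), the third job starts at time 2 and finishes at 5, for a total tardiness of 2.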

73 citations


Journal ArticleDOI
01 Jun 1979
TL;DR: A fuzzy approach to DM is described, incorporating linguistic variables, relations, and algorithms, which helps to solve the problem of partial utilities and their interdependence.
Abstract: Multiattribute decision making (DM) is treated as a special kind of structured human problem solving. Emphasis is placed on the use of the available knowledge about utilities, which is obtained by combining heuristics and traditional aggregation methods. In this way, the problem of partial utilities and their interdependence may be solved. A fuzzy approach to DM is described, incorporating linguistic variables, relations, and algorithms. It is summarized in a formal model and illustrated by an example.

71 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of finding a "good" lot size and reorder point policy for an inventory which is subject to continuous exponential decay is considered, and the method of analysis is an approximate approach which parallels that of Hadley and Whitin in the nodecay case.
Abstract: In this paper we consider the problem of finding a “good” lot size and reorder point policy for an inventory which is subject to continuous exponential decay. Examples of decaying inventories include alcohol, certain types of food items, and radioactive materials. The method of analysis is an approximate approach which parallels that of Hadley and Whitin in the no-decay case. To evaluate the effectiveness of the approximation, a computer simulation is developed to estimate and evaluate the best (Q, r) policy. A comparison of the approximate and simulated policies is made for a total of thirty test cases resulting in a maximum cost error of only 2.77%.

Journal ArticleDOI
TL;DR: An algorithm MUST which generates alternative solutions of equal quality for single-model assembly line balancing problems, based on the Mansoor-Yadin algorithm, which has been successfully used in balancing assembly lines having up to 140 work elements with widely differing precedence structures.
Abstract: Research on single-model assembly line balancing has produced several good algorithms for solving large problems. All these algorithms, however, generate just one solution to the balancing problem. With these, line designers wishing to investigate alternative station combinations of work elements are forced to do so manually. This paper concerns an algorithm MUST which generates alternative solutions of equal quality for single-model assembly line balancing problems. The method is based on the Mansoor-Yadin algorithm and in its optimum seeking form, MUST generates all existing optimal balances in a single pass. Three heuristics, used singly or together, are introduced for solving the larger balancing problems. These are designed to produce about 100 different balance solutions of equal quality at each pass. MUST has been successfully used in balancing assembly lines having up to 140 work elements with widely differing precedence structures. A comparison of this method with MALB, one of the most efficient known heuristic methods, results in MUST dominating or equalling MALB in every case. Reasonable computation times (averaging 125.4 sec. on an IBM 370/168) and core usage are achieved through the use of advanced computation methods. The need to explore alternative station combinations may arise in several ways---for example, the line designer may prefer but not require that certain work elements be allocated to a common station because of access to materials handling and storage facilities, service availability (e.g., compressed air lines), use of common tools or operator skills, etc. Alternatively, minor adjustments to the line may be required between adjacent stations in order to alleviate conditions at one station which exhibits excessive variability in its performance times.
The generation of multiple solutions of equal efficiency adds a new dimension to the quality of balancing effectiveness, enabling the line designer to select the alternative that best suits his requirements.

Journal ArticleDOI
TL;DR: It is demonstrated that the dynamic programming approach is computationally reasonable, in an operational sense, only for small family sizes and for large families heuristic solution methods appear necessary.
Abstract: We consider a group (or family) of items having deterministic, but time-varying, demand patterns. The group is defined by a setup-cost structure that makes coordination attractive (a major setup cost for each group replenishment regardless of how many of the items are involved). The problem is to determine the timing and sizes of the replenishments of all of the items so as to satisfy the demand out to a given horizon in a cost-minimizing fashion. A dynamic programming formulation is illustrated for the case of a two-item family. It is demonstrated that the dynamic programming approach is computationally reasonable, in an operational sense, only for small family sizes. For large families heuristic solution methods appear necessary.

Journal ArticleDOI
TL;DR: In this article, the authors reduced the admissibility of statistical estimators to considerations involving differential inequalities, and then applied the heuristic method to generate conjectures concerning the admissible of certain generalized Bayes procedures in these problems.
Abstract: Questions of admissibility of statistical estimators are reduced to considerations involving differential inequalities. The coefficients of these inequalities involve moments of the underlying distributions; and so are, in principle, not difficult to derive. The methods are "heuristic" because it is necessary to verify on an ad-hoc basis that error terms are small. Some conditions on the structure of the problem are given which we believe will guarantee that these error terms are small. Several different statistical estimation problems are discussed. Each problem is transformed (if necessary) so as to meet the above mentioned structure conditions. Then the heuristic method is applied in order to generate conjectures concerning the admissibility of certain generalized Bayes procedures in these problems.

Journal ArticleDOI
TL;DR: A new method for planning the routes of a fleet of carriers subject to a maximum load restriction is outlined, derived from a combination of the well-known "savings" heuristic rule and Monte Carlo simulation.
Abstract: A new method for planning the routes of a fleet of carriers subject to a maximum load restriction is outlined. It is derived from a combination of the well-known "savings" heuristic rule and Monte Carlo simulation. Without increasing the level of complexity of the search routine beyond that already employed in "savings" based programs a marked reduction in total route length can be obtained. This is demonstrated with the aid of three much-considered problems, and for one of these, distances have been found that are below any previously recorded, even for those algorithms with the support of more elaborate logistics.
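As an illustration of how the "savings" rule and Monte Carlo simulation can be combined (the paper's exact randomization is not given in the abstract), the sketch below perturbs the savings values on each trial so the merge order varies, applies simple end-to-end route merges under a load limit, and keeps the best route set found. All parameter names are assumptions for the sketch.

```python
import math
import random

def randomized_savings(coords, demands, capacity, trials=100, seed=1):
    """Monte Carlo variant of the Clarke-Wright "savings" heuristic (a
    sketch): each trial applies merges in a randomly perturbed savings
    order; the shortest feasible solution over all trials is kept.
    coords[0] is the depot; demands[i] is customer i's load."""
    def d(a, b):
        return math.dist(coords[a], coords[b])
    n = len(coords)
    savings = [(d(0, i) + d(0, j) - d(i, j), i, j)
               for i in range(1, n) for j in range(i + 1, n)]
    rng = random.Random(seed)
    best_len, best = float("inf"), None
    for _ in range(trials):
        # perturb savings values to randomize the merge order
        order = sorted(savings, key=lambda s: -(s[0] * rng.uniform(0.8, 1.2)))
        routes = {i: [i] for i in range(1, n)}   # route id -> customer list
        rid = {i: i for i in range(1, n)}        # customer -> route id
        load = {i: demands[i] for i in range(1, n)}
        for s, i, j in order:
            a, b = rid[i], rid[j]
            if a == b or s <= 0 or load[a] + load[b] > capacity:
                continue
            ra, rb = routes[a], routes[b]
            if ra[-1] != i or rb[0] != j:        # only simple end-to-start merges
                continue
            ra.extend(rb)
            load[a] += load[b]
            for c in rb:
                rid[c] = a
            del routes[b], load[b]
        total = sum(d(0, r[0]) + d(r[-1], 0) +
                    sum(d(r[k], r[k + 1]) for k in range(len(r) - 1))
                    for r in routes.values())
        if total < best_len:
            best_len, best = total, [list(r) for r in routes.values()]
    return best_len, best
```

Unlike the full Clarke-Wright rule, this sketch does not try route reversals before merging; it is only meant to show the randomized-order idea.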

Proceedings ArticleDOI
29 Oct 1979
TL;DR: This work considers techniques for adapting linear lists so that the more frequently accessed elements are found near the front, even though the authors are not told the probabilities of various elements being accessed.
Abstract: We consider techniques for adapting linear lists so that the more frequently accessed elements are found near the front, even though we are not told the probabilities of various elements being accessed. The main results are discussed in two sections. Perhaps the most interesting deals with techniques which move an element toward the front only after it has been requested k times in a row. The other, technically more difficult, section deals with the analysis of the heuristic which moves an element to the head of the list each time it is accessed. The behaviour of this scheme under a number of interesting probability distributions is discussed. Two basic approaches to the technique of moving an element forward after it has been accessed k times in a row are discussed. The first performs the transformation after any k identical requests. The second essentially groups requests into batches of at least k, and performs the action only if the last k requests of a batch are the same. Adopting as the transformation the moving of the requested element to the front of the list, the second approach is shown to lead to faster average search time under all nontrivial probability distributions for k ≥ 2. It is also shown that the "periodic" approach, with k = 2, never leads to an average search time greater than 1.21… times that of the optimal ordering. For the more direct approach, a ratio of 1.36… is shown under the same constraints. In studying the simple move to front heuristic (i.e. k = 1), it is shown that for a particular distribution this scheme can lead to an average number of probes π/2 times that of the optimal order. Within an interesting class of distributions, this is shown to be the worst average behaviour.
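The two rules can be sketched directly; the helper names below are illustrative, not from the paper. `move_to_front` is the simple k = 1 heuristic, and `move_after_k` is the "direct" approach that promotes an element only after k identical requests in a row.

```python
def search_cost(lst, x):
    """Number of comparisons to find x by linear search."""
    return lst.index(x) + 1

def move_to_front(lst, x):
    """Simple move-to-front rule (the k = 1 heuristic)."""
    lst.remove(x)
    lst.insert(0, x)

def move_after_k(lst, x, streak, k):
    """Direct k-in-a-row rule: move x to the front only once it has been
    requested k times consecutively. streak is (last requested element,
    current run length); the updated streak is returned."""
    last, run = streak
    run = run + 1 if x == last else 1
    if run >= k:
        lst.remove(x)
        lst.insert(0, x)
    return (x, run)
```

For example, with `k = 2` a single request for the last element leaves the list unchanged; a second consecutive request moves it to the front.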

Journal ArticleDOI
TL;DR: In this article, a set of strategies for intervening in paranoid conditions using cognitive therapy is proposed, and the suggested strategies are explainable in terms of the model of the patient's input/output linguistic behavior.
Abstract: By the phrase "paranoid conditions" we are referring to patients who show delusions of persecution and/or grandeur, ideas of self-reference, and hypersensitivity. These conditions can be found in association with other physical and psychiatric disorders or by themselves. We have constructed and tested a computer simulation model of the paranoid condition (Colby, 1975, 1976; Faught, Colby, & Parkison, 1977). This model's input/output linguistic behavior imitates that of a paranoid patient in an initial diagnostic interview. The model has successfully passed Turing's test in which expert clinical interviewers were unable to distinguish human patient from computer model (Heiser, Colby, Faught, & Parkison, 1979). In this paper we suggest a set of strategies for intervening in paranoid conditions using cognitive therapy. The suggested strategies are explainable in terms of the model. However, at this stage they are purely heuristic (intended to aid discovery) and speculative. To become established as clinically useful, they must be tested with actual human patients. For a system of psychotherapy to be taken as a serious candidate for a rational technology, it must demonstrate its effectiveness in systematic therapeutic trials using independent judges, control groups, quantitative measures, and long-term follow-ups.

Proceedings Article
20 Aug 1979
TL;DR: The implementation of an automated consultant for MACSYMA, called the Advisor, is described, based on the assumption that the MAC SYMA novice behaves in accordance with a standard heuristic problem solving algorithm, called MUSER.
Abstract: Consultation is a method widely used in computer centers for helping people to use unfamiliar computer systems or languages. Unfortunately, consultants are scarce, expensive, and often unavailable when needed. This paper describes the implementation of an automated consultant for MACSYMA, called the Advisor. The Advisor's implementation is based on the assumption that the MACSYMA novice behaves in accordance with a standard heuristic problem solving algorithm, called MUSER. This assumption is supported by a body of data on the problem solving behavior of MACSYMA users. In solving his problem, the user implicitly generates a goal-subgoal graph, called a "plan". The plan is a direct proof that the commands used actually achieve the goal (in terms of the user's beliefs), and the problem solving algorithm constitutes a grammar for such plans. The key to the consultation process is the analysis of the user's plan. First, the plan is reconstructed by a combined process of dialogue and automatic plan recognition, and then the underlying beliefs are checked for errors. Because of the MUSER model, the Advisor is able to diagnose not only standard errors but also more general misconceptions.

Journal ArticleDOI
TL;DR: This paper presents a single machine scheduling problem with sequence-dependent changeover times; an optimizing solution procedure and various appropriate heuristics are reviewed, and it is demonstrated that the best heuristic for the static problem is not necessarily the best heuristic in the dynamic situation.
Abstract: This paper presents a single machine scheduling problem with sequence dependent changeover times. An optimizing solution procedure and various appropriate heuristics are reviewed. We then go on to consider the performance of these and other heuristics in the dynamic situation, as new jobs arrive to be processed and have to be added into the existing schedule at some time. Clearly an ideal solution would be to reschedule as each new job arrived, but as this is not generally practical from a computational viewpoint, it has to be carried out less frequently. The actual frequency of this rescheduling is clearly of importance, and some of the heuristics are more adaptable to this than others. Some results are presented which attempt to quantify this adaptability for the heuristics in question, and it is demonstrated that the best heuristic for the static problem is not necessarily the best heuristic in the dynamic situation.

Journal Article
TL;DR: The authors find that equilibrium methods are easy to incorporate into existing computer packages, require minimal user intervention, are stable in heavily-congested networks and avoid extreme misfits between predicted and observed flows.
Abstract: The paper is in three sections: (1) the convergence of stochastic methods. The convergence of stochastic methods of road assignment such as BURRELL or DIAL are investigated in capacity-restrained networks where the cost of travel on any link depends on the flow of that link. A convergent solution is one where the costs assumed by the route-finding algorithm are identical to those corresponding to the resulting link flows. Both all-or-nothing and dial assignments are shown to be inherently non-convergent whereas under certain, quite reasonable, conditions the hypothesis underpinning BURRELL assignment should lead to convergent solutions. However, simplifications made by BURRELL in the method of solution for a large network may make the point of convergence extremely difficult, if not impossible, to find. The authors conclude that the instability of these methods, reflected by severe oscillations in flows between iterations, is generally an unavoidable feature of such models. (2) Equilibrium methods. This section reviews the equilibrium methods of assignment to capacity-restraint road networks, methods which are well-known theoretically but have not been used extensively in practice. Their major advantage over heuristic techniques is that they guarantee ultimate convergence to a solution which satisfies Wardrop's first principle of equal and minimum travel costs on all routes used. The principles of equilibrium assignment and a straightforward method based on iterative loading are presented. Results obtained by applying this method to a network of Leeds give better goodness of fit statistics with observed counts than conventional methods requiring comparable cpu times. In addition, the authors find that equilibrium methods are easy to incorporate into existing computer packages, require minimal user intervention, are stable in heavily-congested networks and avoid extreme misfits between predicted and observed flows. (3) Improved equilibrium methods.
Two methods of improved solutions to equilibrium assignment models are described. The first, "quantal loading", is in fact based on a technique first used in Chicago over 20 years ago in which updates of the link times or costs are carried out at regular intervals within a single assignment rather than at the end. An extension which applies the same ideas as part of a series of iterative equilibrium assignments is developed. The major advantage of quantal loading is that it greatly accelerates the rate of convergence to Wardrop equilibrium, on a network of Leeds by a factor of 5 on the first iteration and factors of 2 thereafter. The potential reductions in cpu times are therefore considerable. The second technique is based on an improved method of combining or averaging different sets of link flows, again with the objective of accelerating convergences and reducing cpu. The results are less spectacular, but do show that the

01 May 1979
TL;DR: A heuristic iterative procedure is proposed and tested for finding a periodic review schedule to minimize inventory and setup costs in a multistage inventory system.
Abstract: : This paper considers the lot-sizing problem in a multistage inventory system. External demand may occur at any stage, and is assumed to be known over a finite horizon. A heuristic iterative procedure is proposed and tested for finding a periodic review schedule to minimize inventory and setup costs. (Author)

Journal ArticleDOI
TL;DR: This approach of using the flexibility of a branch-and-bound formulation, along with specialized heuristics to limit the search, can provide a practical compromise between a “pure” search for an exact optimum and a "no search" use of a heuristic alone.
Abstract: We present an adaptation of a branch-and-bound solution approach for the problem of expanding the capacity of a telephone feeder cable network to meet growing demand. We drastically trim the search by generating “heuristic” bounds, based on “analytic” solutions of simpler capacity expansion problems. Although we have thus sacrificed a guarantee of exact optimality, we obtain very good solutions with far less computation than would otherwise be possible. Computational efficiency is important because the algorithm is implemented in a computer program that is used routinely, both in “batch” and “time-share” versions, by telephone company planners and engineers. We believe that this approach of using the flexibility of a branch-and-bound formulation, along with specialized heuristics to limit the search, can provide a practical compromise between a “pure” search for an exact optimum and a “no search” use of a heuristic alone.

Journal ArticleDOI
TL;DR: A planning technique is described, based on the selection, construction and quantification of particular types of scenario (or sets of heuristic hypotheses about the future), for examining future possibilities for energy demand and energy supply.

Journal ArticleDOI
TL;DR: A detailed example of how the complexity measure is used in evaluating the power of heuristic rules used in assessing the performance of the Quasi-Optimizer (QO), a program currently under development is given.
Abstract: The power of certain heuristic rules is indicated by the relative reduction in the complexity of computations carried out, due to the use of the heuristics. A concept of complexity is needed to evaluate the performance of programs as they operate with a varying set of heuristic rules in use. We present such a complexity measure, which we have found useful for the quantitative evaluation of strategies in the context of a long-term research project on poker. We define our measure in terms of the level-complexity for decision trees and study the measure for the relevant class of decision trees with a fixed but arbitrary number of levels, h, and k leaves (all at the last level). We determine the best and worst case distributions for the levels in this measure. We show, for instance, that for the smallest value, L_h(k), and the largest value, U_h(k), which the level-complexity for such trees can have, lim_{k→∞} L_h(k)/log_h(k) = 1 and lim_{k→∞} U_h(k)/log(k) = 1. We show also that the level-entropy assumes its maximum value of log(k) just when the path-entropy reaches its minimum value over all trees with k leaves but, in general, the values of either measure can vary in a rather unrelated manner. We give a detailed example of how the complexity measure is used in evaluating the power of heuristic rules used in assessing the performance of the Quasi-Optimizer (QO), a program currently under development. The objectives of the QO are explained, and the three main phases of operation of the program are described within the framework of the poker project.

Journal ArticleDOI
TL;DR: A heuristic method for solving the optimal network problem is proposed and shown to yield high-quality results; the concept of forced moves is introduced.

Journal ArticleDOI
TL;DR: The overall results suggest that the method and procedure used may provide an effective means of computing schedules for resource-constrained projects, and that the method is much less sensitive to problem size than the branch-and-bound algorithm.
Abstract: The purpose of this paper is to evaluate a heuristic procedure for solving the resource-constrained project scheduling problem. Projects that involve completing a sequence of well-defined activities, some of which can be done in parallel, often are characterized by capacity limitations on some resources. This means schedules generated using standard critical path methods often are not feasible. Resource leveling procedures have been developed to improve the schedule. Formulating the problem with explicit resource constraints is another, more direct, way to accommodate limited capacities. Solution procedures to this problem include simple, single-pass heuristics and branch-and-bound optimization procedures. The former are easy to apply but may yield poor results. The latter provide optimal results but require a substantial amount of computation even for small problems. The procedure evaluated here is representative of a class of multi-pass procedures based on problem decomposition. A brief outline of the h...
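The "simple, single-pass heuristics" mentioned above can be illustrated by a serial schedule-generation scheme: activities are started one at a time, in priority order, at the earliest precedence- and resource-feasible time. This is a minimal sketch of the generic idea, not the paper's multi-pass decomposition procedure; the data layout and priority values are hypothetical.

```python
def serial_schedule(activities, capacity):
    """Single-pass serial scheduler for one renewable resource.
    activities: name -> (duration, resource demand, predecessors, priority)
    capacity: resource units available per period."""
    finish = {}
    start = {}
    usage = {}                         # period -> units in use
    while len(finish) < len(activities):
        # activities whose predecessors have all finished
        eligible = [a for a in activities
                    if a not in finish
                    and all(p in finish for p in activities[a][2])]
        a = min(eligible, key=lambda x: activities[x][3])  # best priority
        dur, demand, preds, _ = activities[a]
        t = max((finish[p] for p in preds), default=0)
        # push the start forward until every period has spare capacity
        while any(usage.get(tt, 0) + demand > capacity
                  for tt in range(t, t + dur)):
            t += 1
        for tt in range(t, t + dur):
            usage[tt] = usage.get(tt, 0) + demand
        start[a], finish[a] = t, t + dur
    return start, finish

# Toy project: (duration, demand, predecessors, priority value)
acts = {
    "A": (2, 2, [], 1),
    "B": (3, 2, [], 2),
    "C": (2, 1, ["A"], 3),
    "D": (1, 3, ["B", "C"], 4),
}
start, finish = serial_schedule(acts, capacity=3)
print(finish)   # {'A': 2, 'B': 5, 'C': 4, 'D': 6}
```

Note that B cannot run alongside A (demand 2 + 2 exceeds capacity 3) and is delayed to period 2, which is exactly the kind of resource-driven infeasibility that pure critical-path schedules ignore; a different priority rule would give a different single-pass schedule, which is why multi-pass procedures can improve on any one rule.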

Journal ArticleDOI
TL;DR: Optimal and heuristic bounds are given on the optimal location for the Weber problem when the locations of demand points are not deterministic but may lie within given circles.
Abstract: Optimal and heuristic bounds are given on the optimal location for the Weber problem when the locations of demand points are not deterministic but may lie within given circles. Rectilinear, Euclidean, and squared Euclidean distance measures are discussed. The exact shape of the set of all possible optimal points is given in the rectilinear and squared Euclidean cases. A heuristic method for the computation of the region of possible optimal points is developed in the case of the Euclidean distance problem. The maximal distance between a possible optimal point and the deterministic solution is also computed heuristically.
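For background, the deterministic Euclidean Weber point that the paper's bounds are measured against is usually computed with the classical Weiszfeld iteration. The sketch below shows that standard algorithm only; it is not the paper's bounding procedure, and the simple handling of a facility coinciding with a demand point is a common simplification.

```python
import math

def weiszfeld(points, weights, iters=200, eps=1e-9):
    """Weiszfeld iteration for the deterministic Euclidean Weber point:
    the location minimising the weighted sum of distances to demand points.
    Classical algorithm, shown for background only."""
    # start from the weighted centroid
    w_total = sum(weights)
    x = sum(w * px for w, (px, _) in zip(weights, points)) / w_total
    y = sum(w * py for w, (_, py) in zip(weights, points)) / w_total
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for w, (px, py) in zip(weights, points):
            d = math.hypot(x - px, y - py)
            if d < eps:               # iterate landed on a demand point
                return (px, py)       # (simplified stopping rule)
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        x, y = num_x / den, num_y / den
    return (x, y)

# Four unit-weight demand points at the corners of a square:
# by symmetry the Weber point is the centre of the square.
pts = [(0, 0), (2, 0), (0, 2), (2, 2)]
print(weiszfeld(pts, [1, 1, 1, 1]))   # ≈ (1.0, 1.0)
```

In the paper's setting each of these demand points could move anywhere inside a given circle, and the question becomes how far the resulting optimal location can drift from this deterministic solution.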


Journal ArticleDOI
A.D. Pearman1
TL;DR: It is argued that both heuristic and non-heuristic algorithms for the road network optimisation problem would benefit from a greater understanding of the structure of the set of feasible solutions to such problems.
Abstract: This paper argues that both heuristic and non-heuristic algorithms for the road network optimisation problem would benefit from a greater understanding of the structure of the set of feasible solutions to such problems. In order to provide this, a comparative study of a number of spatial combinatorial problems was undertaken. The results show that the road network optimisation problem is rich in good sub-optimal solutions. The implications of this finding for the development of optimising and heuristic algorithms are discussed, and some suggestions are made as to where future research on network optimisation problems could most fruitfully be directed.

Journal ArticleDOI
TL;DR: A model is developed that selects the most profitable set of attributes to be inverted, and it is shown that its properties can be used to derive heuristics that simplify the solution of specific problems.