scispace - formally typeset

Showing papers on "Heuristic published in 1982"


Proceedings ArticleDOI
01 Jan 1982
TL;DR: An iterative mincut heuristic for partitioning networks is presented whose worst case computation time, per pass, grows linearly with the size of the network.
Abstract: An iterative mincut heuristic for partitioning networks is presented whose worst case computation time, per pass, grows linearly with the size of the network. In practice, only a very small number of passes are typically needed, leading to a fast approximation algorithm for mincut partitioning. To deal with cells of various sizes, the algorithm progresses by moving one cell at a time between the blocks of the partition while maintaining a desired balance based on the size of the blocks rather than the number of cells per block. Efficient data structures are used to avoid unnecessary searching for the best cell to move and to minimize unnecessary updating of cells affected by each move.
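A single pass of the move-based scheme described above might look like the following sketch. It is a simplification for illustration only: unit-size cells, an ordinary graph rather than a network of nets, and naive gain recomputation in place of the efficient bucket data structures the authors describe.

```python
# Illustrative single pass of an iterative mincut partitioning heuristic:
# repeatedly move the unlocked cell with the best gain (cut reduction),
# lock it, and remember the best partition seen during the pass.
# Simplified sketch: unit cell sizes, naive O(n) gain recomputation.

def cut_size(edges, side):
    """Number of edges whose endpoints lie in different blocks."""
    return sum(1 for u, v in edges if side[u] != side[v])

def mincut_pass(edges, side, max_imbalance=2):
    """One pass; side maps cell -> block (0 or 1). max_imbalance bounds
    how much the two block sizes may differ (in cells)."""
    cells = sorted(side)
    locked = set()
    best_side, best_cut = dict(side), cut_size(edges, side)
    while len(locked) < len(cells):
        candidates = []
        for c in cells:
            if c in locked:
                continue
            # balance check: moving c must keep block sizes within tolerance
            size_a = sum(1 for x in cells if side[x] == 0)
            new_a = size_a - 1 if side[c] == 0 else size_a + 1
            if abs(2 * new_a - len(cells)) > max_imbalance:
                continue
            # gain = cut edges uncut by the move minus uncut edges newly cut
            gain = sum(1 if side[u] != side[v] else -1
                       for u, v in edges if c in (u, v))
            candidates.append((gain, c))
        if not candidates:
            break
        gain, c = max(candidates)
        side[c] ^= 1          # move the cell to the other block
        locked.add(c)
        cur = cut_size(edges, side)
        if cur < best_cut:
            best_cut, best_side = cur, dict(side)
    return best_side, best_cut
```

In practice the paper's bucket arrays make selecting the best cell and updating affected gains constant-time per move, which is what yields the linear per-pass bound.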

2,463 citations


Journal ArticleDOI
TL;DR: Computational results indicate that the procedures provide cost-effective optimal solutions for small problems and good heuristic solutions for larger problems, while simultaneously taking into account a variety of constraint types.
Abstract: This paper introduces methods for formulating and solving a general class of nonpreemptive resource-constrained project scheduling problems in which the duration of each job is a function of the resources committed to it. The approach is broad enough to permit the evaluation of numerous time or resource-based objective functions, while simultaneously taking into account a variety of constraint types. Typical of the objective functions permitted are minimize project duration, minimize project cost given performance payments and penalties, and minimize the consumption of a critical resource. Resources which may be considered include those which are limited on a period-to-period basis such as skilled labor, as well as those such as money, which are consumed and constrained over the life of the project. At the planning stage the user of this approach is permitted to identify several alternative ways, or modes, of accomplishing each job in the project. Each mode may have a different duration, reflecting the magnitude and mix of the resources allocated to it. At the scheduling phase, the procedure derives a solution which specifies how each job should be performed, that is, which mode should be selected, and when each mode should be scheduled. In order to make the presentation concrete, this paper focuses on two problems: given multiple resource restrictions, minimize project completion time, and minimize project cost. The latter problem is also known as the resource-constrained time-cost tradeoff problem. Computational results indicate that the procedures provide cost-effective optimal solutions for small problems and good heuristic solutions for larger problems. The programmed solution algorithms are relatively simple and require only modest computing facilities, which permits them to be potentially useful scheduling tools for organizations having small computer systems.

454 citations


Journal ArticleDOI
TL;DR: A categorization process based on two powerful project summary measures is provided, and it is shown that a rule introduced by this research performs significantly better on most categories of projects.
Abstract: Application of heuristic solution procedures to the practical problem of project scheduling has previously been studied by numerous researchers. However, there is little consensus about their findings, and the practicing manager is currently at a loss as to which scheduling rule to use. Furthermore, since no categorization process was developed, it is assumed that once a rule is selected it must be used throughout the whole project. This research breaks away from this tradition by providing a categorization process based on two powerful project summary measures. The first measure identifies the location of the peak of total resource requirements and the second measure identifies the rate of utilization of each resource type. The performance of the rules is classified according to the values of these two measures, and it is shown that a rule introduced by this research performs significantly better on most categories of projects.

300 citations


Book ChapterDOI
01 Apr 1982
TL;DR: In this article, the authors focus on how one learns both types of rules from experience, and how they are tested and maintained in the face of experience under what conditions do we fail to learn about the biases and mistakes that can result from their use?
Abstract: Current work in decision-making research has clearly shifted from representing choice processes via normative models (and modifications thereof) to an emphasis on heuristic processes developed within the general framework of cognitive psychology and theories of information processing (Payne, 1980; Russo, 1977; Simon, 1978; Slovic, Fischhoff, & Lichtenstein, 1977; Tversky & Kahneman, 1974, 1980). The shift in emphasis from questions about how well people perform to how they perform is certainly important (e.g., Hogarth, 1975). However, the usefulness of studying both questions together is nowhere more evident than in the study of heuristic rules and strategies. The reason for this is that the comparison of heuristic and normative rules allows one to examine discrepancies between actual and optimal behavior, which then raises questions regarding why such discrepancies exist. In this chapter, I focus on how one learns both types of rules from experience. The concern with learning from experience raises a number of issues that have not been adequately addressed, for example: Under what conditions are heuristics learned? How are they tested and maintained in the face of experience? Under what conditions do we fail to learn about the biases and mistakes that can result from their use?

241 citations


01 Jan 1982
TL;DR: This dissertation argues that examining more closely the way animate systems cope with real-world environments can provide valuable insights about the structural requirements for intelligent behavior.
Abstract: As research in artificial intelligence focuses on increasingly complex task domains, a key question to be resolved is how to design a system that can efficiently acquire knowledge and gracefully adapt its behavior in an uncertain environment. This dissertation argues that examining more closely the way animate systems cope with real-world environments can provide valuable insights about the structural requirements for intelligent behavior. Accordingly, a class of simulated environments is designed that embodies many of the important functional properties characteristic of natural environments. A new type of adaptive system is then defined that uses pattern-directed, rule-based processing to cope with uncertain information. As a rule-based system, the system presented here is notable in that several rules can be active at once and there are no fixed priorities determining the order in which rules can be activated. Moreover, the syntax of each rule is simple enough to make a powerful learning heuristic applicable--one that is provably more efficient than the techniques used in most other adaptive rule-based systems. A simple version of the adaptive system is implemented as a hypothetical organism having to locate resources and avoid noxious stimuli by generating temporal sequences of actions in a simulated environment. Simulation results show that the naive organism quickly acquires the knowledge required to function effectively. Further experiments show that the system is capable of discriminating a large class of schematic patterns, and that prior learning experiences transfer to novel situations. The results presented here demonstrate that activity in a collection of simple computational elements--operating in parallel and activated stochastically--can be orchestrated to produce reliable behavior in a challenging environment.
The system touches on several issues related to cognitive functioning such as the generic representation of objects and the management of limited processing resources. These issues have been addressed in a way that is computationally feasible and that allows for rigorous testing.

217 citations


Journal ArticleDOI
TL;DR: In this article, a heuristic lot-sizing technique for multi-stage material requirements planning systems is proposed; previous studies have addressed the problem mainly in the context of a single stage.
Abstract: Most of the recent studies of heuristic lot-sizing techniques for multi-stage material requirements planning systems have investigated the problem in the context of a single stage. In this paper, t...

206 citations



Journal ArticleDOI
TL;DR: Three extensions of the A* search algorithm are introduced which improve the search efficiency by relaxing the admissibility condition and are shown to be significant in difficult problems, i.e., problems requiring a large number of expansions due to the presence of many subtours of roughly equal costs.
Abstract: The paper introduces three extensions of the A* search algorithm which improve the search efficiency by relaxing the admissibility condition. 1) A*ε employs an admissible heuristic function but invokes quicker termination conditions while still guaranteeing that the cost of the solution found will not exceed the optimal cost by a factor greater than 1 + ε. 2) Rδ* may employ heuristic functions which occasionally violate the admissibility condition, but guarantees that at termination the risk of missing the opportunity for further cost reduction is at most δ. 3) Rδ*ε is a speedup version of Rδ*, combining the termination condition of A*ε with the risk-admissibility condition of Rδ*. The Traveling Salesman problem was used as a test vehicle to examine the performances of the algorithms A*ε and Rδ*. The advantages of A*ε are shown to be significant in difficult problems, i.e., problems requiring a large number of expansions due to the presence of many subtours of roughly equal costs. The use of Rδ* is shown to produce a 4:1 reduction in search time with only a minor increase in final solution cost.
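The first extension (the ε-admissible variant, often written A*ε) can be illustrated with a short sketch: any open node whose f-value is within a factor (1 + eps) of the current minimum may be expanded, and here the one with the smallest heuristic value is chosen. The graph format and the tie-breaking rule are illustrative assumptions, not the paper's implementation.

```python
import heapq

def astar_epsilon(graph, h, start, goal, eps=0.5):
    """epsilon-admissible best-first search: returns a path whose cost is
    at most (1 + eps) times optimal, assuming h is admissible.
    graph: dict node -> list of (neighbor, edge_cost); h: dict node -> float."""
    g = {start: 0.0}
    parent = {start: None}
    open_heap = [(h[start], start)]
    closed = set()
    while open_heap:
        f_min = open_heap[0][0]
        # FOCAL set: open nodes with f <= (1 + eps) * f_min; pick smallest h
        focal = [(h[n], f, n) for f, n in open_heap
                 if f <= (1 + eps) * f_min and n not in closed]
        _, _, node = min(focal)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1], g[goal]
        closed.add(node)
        open_heap = [(f, n) for f, n in open_heap if n != node]
        heapq.heapify(open_heap)
        for nbr, w in graph.get(node, []):
            if nbr in closed:
                continue
            if g[node] + w < g.get(nbr, float("inf")):
                g[nbr] = g[node] + w
                parent[nbr] = node
                heapq.heappush(open_heap, (g[nbr] + h[nbr], nbr))
    return None, float("inf")
```

Because the goal is only accepted from the focal set, its cost is bounded by (1 + eps) times the smallest f-value on the open list, which is itself a lower bound on the optimal cost.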

188 citations


Journal ArticleDOI
TL;DR: An approach to spatial analysis which is more closely tailored to archaeological objectives and archaeological data than are more "traditional" quantitative techniques such as nearest neighbor analysis is discussed.
Abstract: This article discusses an approach to spatial analysis which is more closely tailored to archaeological objectives and archaeological data than are more "traditional" quantitative techniques such as nearest neighbor analysis. Heuristic methods, methods which make use of the problem context and which are guided in part by intuitively derived "rules," are discussed in general and with reference to the problem of spatial analysis in archaeology. A preliminary implementation of such a method is described and applied to artificial settlement data and artifact distributions from the Magdalenian camp of Pincevent. Finally, the prospects for further development of heuristic methods are elaborated. SPATIAL ANALYSIS MAY BE SEEN as a process of searching for theoretically meaningful patterns in spatial data. Of course, this problem has been approached by archaeologists in several ways. The most obvious method of spatial analysis is the visual examination of a point distribution on a map with relevant background information in mind. This intuitive approach has been forsaken (and berated) by many archaeologists with greater aspirations to rigor, in favor of quantitative techniques of spatial analysis, such as nearest neighbor analysis. These techniques generally yield a summary statistic which attempts to characterize the spatial pattern with a single number and perhaps test its significance. The summary statistic is commonly compared from period to period or from area to area. This article reports the progress of an experiment in an alternative approach to the analysis of spatial patterns. This approach, the heuristic approach, is synthetic in that it attempts to open the way for the use of contextual knowledge and human expertise within a formal (computer-executed) procedure for aiding human-directed spatial analysis. This presentation starts with a brief review of "traditional" quantitative approaches to spatial analysis.
It is followed by a discussion of heuristic approaches to problem solving and their application to spatial analysis in archaeology. In the next section, heuristic procedures that have been developed are applied to artificial data sets and then to an analysis of actual data from the Magdalenian camp of Pincevent. The article closes with a discussion of the conclusions of this experiment and prospects for further development.

174 citations


Journal ArticleDOI
TL;DR: Approximation algorithms are given where the solutions are achieved with heuristic search methods and test results are presented to support the feasibility of the methods.
Abstract: The following two-dimensional bin packing problems are considered. (1) Find a way to pack an arbitrary collection of rectangular pieces into an open-ended, rectangular bin so as to minimize the height to which the pieces fill the bin. (2) Given rectangular sheets, i.e. bins of fixed width and height, allocate the rectangular pieces, the object being to obtain an arrangement that minimizes the number of sheets needed. Approximation algorithms are given where the solutions are achieved with heuristic search methods. Test results are presented to support the feasibility of the methods.
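For problem (1), the flavor of such heuristics can be illustrated with the classical first-fit decreasing-height (FFDH) level rule: sort pieces by height and place each on the first level with enough remaining width, opening a new level otherwise. This is a standard strip-packing heuristic used here for illustration, not the authors' search-based method.

```python
def ffdh_pack(pieces, bin_width):
    """First-Fit Decreasing Height strip packing.
    pieces: list of (width, height); returns the total height to which
    the pieces fill an open-ended bin of the given width."""
    pieces = sorted(pieces, key=lambda p: p[1], reverse=True)
    levels = []  # each level: [remaining_width, level_height]
    for w, h in pieces:
        if w > bin_width:
            raise ValueError("piece wider than bin")
        for level in levels:
            if level[0] >= w:          # first level with room
                level[0] -= w
                break
        else:                          # no level fits: open a new one
            levels.append([bin_width - w, h])
    return sum(h for _, h in levels)
```

Sorting by decreasing height means each new level's height is set by its first (tallest) piece, which keeps the wasted space above shorter pieces on the same level small.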

132 citations



Journal ArticleDOI
TL;DR: In this paper, a heuristic rule is proposed to reduce the number of individual MILPs that have to be solved and thus reduce the total computational effort, assuming that if the same values of the integer variables are optimal at two different values of a change parameter, these same integer variable values will also be optimal at all intermediate parameter values.
Abstract: A method is developed for carrying out parametric analysis on a mixed integer linear program (MILP) as either objective function coefficients or right-hand-side values of the constraints are varied continuously. The method involves solving MILPs at point values of the parameters of variation and joining the results by LP parametric analysis. The procedure for parametric analysis on the objective function can be continued until a theoretical result proves that the analysis is complete. However, a heuristic rule that is presented may greatly reduce the number of individual MILPs that have to be solved and thus reduce the total computational effort. The rule assumes that if the same values of the integer variables are optimal at two different values of the change parameter, these same integer variable values will also be optimal at all intermediate parameter values. If the rule is applied, a complete parametric analysis requires solving 2n different MILPs, where n is the number of different sets of optimal i...
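The heuristic rule above can be sketched as a bisection scheme: solve the MILP at the two endpoints of the parameter interval, and if the optimal integer solution is the same at both, assume (per the rule) that it is optimal throughout; otherwise subdivide. Everything here is an illustrative stand-in: `solve_milp` is a hypothetical solver callback returning the optimal integer part and objective value, and real implementations would join the pieces by LP parametric analysis rather than by a tolerance.

```python
def parametric_points(solve_milp, lo, hi, tol=1e-6):
    """Locate parameter values where the optimal integer solution changes.
    Applies the heuristic rule: identical integer solutions at both
    endpoints are assumed to hold over the whole interval; otherwise the
    interval is bisected until it is shorter than tol."""
    x_lo, _ = solve_milp(lo)
    x_hi, _ = solve_milp(hi)
    if x_lo == x_hi or hi - lo < tol:
        return [lo, hi]
    mid = (lo + hi) / 2
    left = parametric_points(solve_milp, lo, mid, tol)
    right = parametric_points(solve_milp, mid, hi, tol)
    return left[:-1] + right   # drop the duplicated midpoint
```

The rule's payoff is visible here: subintervals where the integer solution is stable are dispatched with two MILP solves, and effort concentrates only around the breakpoints.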

Journal ArticleDOI
TL;DR: In this paper, a lower bound on the number of cuts needed for termination is derived for the quadratic assignment problem, and several heuristics which are derived from the cutting planes produce optimal or good quality solutions early on in the search process.
Abstract: This paper uses the formulation of the quadratic assignment problem as that of minimizing a concave quadratic function over the assignment polytope. Cutting plane procedures are investigated for solving this problem. A lower bound derived on the number of cuts needed for termination indicates that conventional cutting plane procedures would require a huge computational effort for the exact solution of the quadratic assignment problems. However, several heuristics which are derived from the cutting planes produce optimal or good quality solutions early on in the search process. An illustrative example and computational results are presented.

Journal ArticleDOI
TL;DR: In this paper, the authors deal with the minimization of an objective function which incorporates two conflicting criteria; cost minimization and closeness rating maximization, for facilities design problems, and a heuristic approach is developed which takes an initial layout and improves it step by step using a pairwise exchange routine.
Abstract: This paper deals with the minimization of an objective function which incorporates two conflicting criteria, cost minimization and closeness rating maximization, for facilities design problems. The objective function represents the difference of materials handling cost and the closeness rating with predefined weights assigned to both criteria. A heuristic approach is developed which takes an initial layout and improves it step by step using a pairwise exchange routine.
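A sketch of such a pairwise exchange routine follows. The objective form (weighted handling cost minus a distance-discounted closeness reward) and all names are illustrative assumptions, not the paper's exact formulation.

```python
import itertools

def layout_cost(assign, flow, close, dist, w_cost=1.0, w_close=1.0):
    """Objective = weighted material-handling cost minus a weighted
    closeness reward (here discounted by distance, an assumed form).
    assign[i] = location of facility i; flow/close are dicts keyed by
    facility pairs, dist by location pairs."""
    total = 0.0
    for (i, j), f in flow.items():
        total += w_cost * f * dist[(assign[i], assign[j])]
    for (i, j), c in close.items():
        total -= w_close * c / (1 + dist[(assign[i], assign[j])])
    return total

def pairwise_exchange(assign, flow, close, dist):
    """Swap facility pairs as long as some swap lowers the objective."""
    improved = True
    while improved:
        improved = False
        for a, b in itertools.combinations(sorted(assign), 2):
            before = layout_cost(assign, flow, close, dist)
            assign[a], assign[b] = assign[b], assign[a]
            if layout_cost(assign, flow, close, dist) < before - 1e-12:
                improved = True          # keep the improving swap
            else:
                assign[a], assign[b] = assign[b], assign[a]  # revert
    return assign
```

Like most exchange heuristics this only guarantees a local optimum with respect to single swaps, so the quality of the initial layout matters.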

Journal ArticleDOI
TL;DR: The problem definition and solution procedure overcome the problem of severe nonlinearity of inter-department movement times relative to distance, which enter the multi-floor problem because of the indirectness of routing, and because of different movement speeds compared with the single floor case.
Abstract: A descriptive problem definition and a tested computerized heuristic solution procedure are offered for the problem of relative location of facilities in or layout of a multi-floor building. The problem definition and solution procedure overcome the problem of severe nonlinearity of inter-department movement times relative to distance, which enter the multi-floor problem because of the indirectness of routing, and because of different movement speeds compared with the single floor case. Both the definition and solution procedure are application-oriented, concentrating on practical aspects of multi-floor space allocation. Substantial savings in an implementation are reported. The procedure is particularly relevant to planning for an organization moving into a new multi-floor building. Additionally, the procedure can be used, without modification, for an organization spread over more than one building, and for single floor facilities. The possibility of alternative inter-floor routings is incorporated.

Journal ArticleDOI
TL;DR: The results indicate that, under certain conditions, the computationally simple Silver-Meal heuristic provides lower lot-sizing costs than the Wagner-Whitin algorithm.
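The Silver-Meal heuristic referred to here chooses each lot to cover as many consecutive periods as keeps the average cost per period covered decreasing. A minimal sketch under standard assumptions (a fixed setup cost per lot and a linear per-unit, per-period holding cost):

```python
def silver_meal(demand, setup_cost, holding_cost):
    """Return lot sizes covering the demand vector period by period.
    Each lot is extended while the average cost per period covered
    (setup plus accumulated holding) keeps decreasing."""
    lots, t, n = [], 0, len(demand)
    while t < n:
        best_avg = float(setup_cost)   # lot covering period t alone
        holding, periods = 0.0, 1
        while t + periods < n:
            # demand of period t+periods is carried for `periods` periods
            holding += holding_cost * periods * demand[t + periods]
            avg = (setup_cost + holding) / (periods + 1)
            if avg > best_avg:
                break                  # average started rising: stop
            best_avg = avg
            periods += 1
        lots.append(sum(demand[t:t + periods]))
        t += periods
    return lots
```

Its appeal, as the TL;DR notes, is that this myopic stopping rule is computationally trivial compared with the Wagner-Whitin dynamic program yet often performs comparably.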

Journal ArticleDOI
TL;DR: A network-based optimizing approach to the classroom/time model which rapidly approximates the solutions is devised which combines the insight of the scheduler with combinatorial and searching ability of a computer via a transshipment optimization network model.

01 Jan 1982
TL;DR: In this article, an M/M/1 queue with fixed arrival rate and controllable service rate is considered, where the objective is to minimize the expected long-run average of a cost rate, which is a sum of two functions, associated with the queue length and the service rate, respectively.
Abstract: This thesis consists of three parts. In the first one, optimal policies are constructed for some single-line queueing situations. The second part deals with finite-state Markovian decision processes, and in the third part the practical modelling of a more complex problem is discussed and exemplified. The central control object of part I is an M/M/1 queue with fixed arrival rate and controllable service rate. The objective is to minimize the expected long-run average of a cost rate, which is a sum of two functions, associated with the queue length (the holding cost) and the service rate (the service cost), respectively. For the case of a finite waiting-room, terminal costs are constructed such that a solution to the associated dynamic programming (Bellman) equation exists which is affine in the time parameter. The corresponding optimal control is independent of both time and the length of the control interval. It has a form which is subsequently used in generalizing to the case of an infinite waiting-room. For this case, the analysis results in an efficient algorithm and in several structural results. Assuming essentially only that the holding cost is increasing, it is proved that a monotone optimal policy exists, i.e. that the optimal choice of service rate is an increasing function of the present queue length. Three variations of the central problem are also treated in part I. These are the M/M/c problem (for which the above monotonicity result holds only under a stronger condition), the problem of a controllable arrival rate (with fixed service rate), and the discounted cost problem. In part II, finite-state Markovian decision processes are discussed. A brief and heuristic introduction is given regarding continuous-time Markov chains, cost structures on these, and the problem of constructing an optimal policy. The purpose is to point out the relations to the queueing control problem with finite waiting-room.
Counterexamples demonstrate that the approach of part I is not universally applicable. In part III, a simplified model is discussed for a situation where the customers may reenter the queue after a stochastic delay. It is argued that under heavy-traffic conditions, the influx of reentering customers can be approximated with the output of a linear stochastic system with state-dependent Gaussian noise, whose dynamics depend on the delay distribution. This idea is exemplified with the results from a simulated experiment on a telephone station.

Journal ArticleDOI
TL;DR: The paper presents computational experience with the algorithm for a variety of randomly generated test problems (up to 50 jobs and 50 machines in size), and compares its performance with other published heuristic techniques.
Abstract: This paper describes a heuristic procedure for sequencing the n job m machine static flowshop. Basically, the procedure is performed in two overall steps. In the first step, each of the n jobs is tested as a potential immediate follower to each of the other jobs. In effect, this step of the procedure asks the question, ‘how well does a particular job fit in terms of job blocking or machine idleness if it were to follow some other job?’ An overall figure of merit, or cost cij, is determined for each job j as a follower to another job i. Six different heuristics are presented for determining sets of cij values. Using these values of cij, the second step then heuristically develops a job sequence by solving the travelling salesman problem. The paper also presents computational experience with the algorithm for a variety of randomly generated test problems (up to 50 jobs and 50 machines in size), and compares its performance with other published heuristic techniques.
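Step two, building a sequence from the c_ij values, can be done with a simple TSP-style construction. The greedy nearest-follower rule below, restarted from every job, is one common choice, used here for illustration rather than as the paper's exact procedure.

```python
def total_cost(c, seq):
    """Sum of follower costs along a job sequence."""
    return sum(c[a][b] for a, b in zip(seq, seq[1:]))

def sequence_jobs(c):
    """Greedy nearest-follower construction from a follower-cost matrix
    c[i][j] (figure of merit for job j immediately following job i).
    Tries every starting job and keeps the cheapest sequence found."""
    n = len(c)
    best = None
    for start in range(n):
        seq, remaining = [start], set(range(n)) - {start}
        while remaining:
            nxt = min(remaining, key=lambda j: c[seq[-1]][j])
            seq.append(nxt)
            remaining.remove(nxt)
        if best is None or total_cost(c, seq) < total_cost(c, best):
            best = seq
    return best
```

Any open-path TSP heuristic could be substituted for this construction; the essential idea is that a good sequence chains together low-cost follower pairs.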

Proceedings ArticleDOI
01 Dec 1982
TL;DR: In this paper, mean value analysis is used to derive approximate algorithms for networks with general servers and FIFO service rule, and the corresponding heuristic technique is described and validated by means of simulation.
Abstract: The application of queuing network theory to the performance evaluation of flexible manufacturing systems (FMS) usually requires the assumption of exponential processing times on machines. This has so far resulted in pessimistic, although qualitatively robust, evaluation algorithms. A recent computational technique, named mean-value analysis, makes it possible to depart from this unrealistic assumption and to derive approximate algorithms for networks with general servers and a FIFO service rule. The case of deterministic service times is particularly relevant for FMSs, and the corresponding heuristic technique is described and validated by means of simulation. This technique can be extended to the case of unreliable deterministic servers, making it possible to assess the overall effect of short machine incidents on the production rate of the system.

Journal ArticleDOI
TL;DR: The paper presents a heuristic solution technique for an extended vehicle scheduling problem that includes the aspects of multiple use of vehicles and the possibility of postponing some of the deliveries.

Journal ArticleDOI
TL;DR: A heuristic approach to the solution of the quadratic assignment problem is presented: a simple procedure is used to obtain a good feasible starting point, the problem is then solved as a nonlinear program (ignoring the integrality conditions) using MINOS, and finally the near-integer solution is converted into an integer feasible solution using a heuristic procedure.

Proceedings ArticleDOI
30 Aug 1982
TL;DR: An approximate analytical method for estimating performance statistics of general closed queueing network models of computing systems is presented, based on the aggregation theorem (Norton's theorem) of Chandy, Herzog and Woo.
Abstract: An approximate analytical method for estimating performance statistics of general closed queueing network models of computing systems is presented. These networks may include queues with priority scheduling disciplines and non-exponential servers and several classes of jobs. The method is based on the aggregation theorem (Norton's theorem) of Chandy, Herzog and Woo.

Journal ArticleDOI
TL;DR: This work proposes a primal subgradient algorithm to solve the well-known strong linear programming relaxation of the problem, and shows that an optimal solution is discovered with high frequency.
Abstract: The most successful algorithms for solving simple plant location problems are presently dual-based procedures. However, primal procedures have distinct practical advantages (e.g., in sensitivity analysis). We propose a primal subgradient algorithm to solve the well-known strong linear programming relaxation of the problem. Typically this algorithm converges very fast to a point whose objective value is close to the integer optimum and where most of the decision variables have been fixed either to 0 or to 1. To fix the values of the remaining variables we use a greedy-interchange algorithm. Thus we propose this approach as a heuristic. Computational experience shows that an optimal solution is discovered with high frequency.

Book ChapterDOI
01 Jan 1982
TL;DR: Relatively little attention has been paid to the solution of the assignment problem in a dynamic framework, which means that demand structure and trip costs are varying over time.
Abstract: Traffic assignment is one of the most important steps in the mathematical theory of traffic flow and there is a lot of literature dealing with this subject (see Potts and Oliver (1972) or Florian (1976)). But almost all the papers have in common that only a static version of the assignment or the traffic equilibrium problem is treated. This means that there is always the assumption in the models that there are no changes in the structure of the network during the trip from an origin to a destination. Alternatively one can also say that there is no time dependency for the trips considered. Relatively little attention has been paid to the solution of the assignment problem in a dynamic framework, which means that demand structure and trip costs are varying over time. There are basically two models which consider dynamic assignment problems. One is by Yagar (1976) which gives heuristic principles for a dynamic assignment by an "emulation technique". This technique uses a dynamic demand structure, but there is no explicit dynamic flow model. The other model is by Merchant and Nemhauser (1978 a,b), who formulated a dynamic flow model and also presented an algorithm which obtains system-optimized flows. Perhaps one should mention in this context also the work of Maher and Akcelik (1977) about route control. Although there is no dynamic assignment model presented there, the authors try to connect traffic assignment and dynamic flow structure by a combination of incremental loading and simulation in order to evaluate different strategies of traffic control. This is of interest because in urban networks traffic control strongly influences the flow dynamics and therefore also route choice, as is shown in an example given by Smith (1979).

Journal ArticleDOI
TL;DR: A subtheory of human intelligence based on the component construct is sketched, along with a system of interrelations among the various kinds of components; the functions of components in human intelligence are assessed by considering how the proposed subtheory can account for various empirical phenomena in the literature on human intelligence.
Abstract (of the original article): This article sketches a subtheory of human intelligence based on the component construct. Components differ in their levels of generality and in their functions. Metacomponents are higher-order control processes used for planning how a problem should be solved, for making decisions regarding alternative courses of action during problem solving, and for monitoring solution processes. Performance components are processes used in the execution of a problem-solving strategy. Acquisition components are processes used in learning new information. Retention components are processes used in retrieving previously stored knowledge. Transfer components are used in generalization, that is, in carrying over knowledge from one task or task context to another. A mechanism for the interaction among components of different kinds and multiple components of the same kind can account for certain interesting aspects of laboratory and everyday problem solving. A brief historical overview of alternative basic units for understanding intelligence is followed by a detailed description of one of these units, the component, and by a differentiation among various kinds of components. Examples of each kind of component are given, and the use of each of these components in a problem-solving situation is illustrated. Then, a system of interrelations among the various kinds of components is described. Finally, the functions of components in human intelligence are assessed by considering how the proposed subtheory can account for various empirical phenomena in the literature on human intelligence. A heuristic for componential analysis: "Try old goals...

Journal ArticleDOI
TL;DR: In this paper, a simple transportation formulation is presented which permits a direct computation of minimum margin for investor's option accounts and shows considerable savings when compared with existing heuristic procedures, which are shown to be inefficient and suboptimal.
Abstract: The calculation of margin for investor's option accounts is a complex and costly problem for brokerage houses. The existing procedures usually involve a heuristic requiring sequential computations. These are shown to be inefficient and suboptimal. A simple transportation formulation is presented which permits a direct computation of minimum margin and shows considerable savings when compared with existing heuristic procedures.

Journal ArticleDOI
TL;DR: A heuristic method is used to solve the vehicle scheduling problem by maintaining local optimality whilst approaching the feasible region and giving results comparable with the best published algorithms.
Abstract: A heuristic method is used to solve the vehicle scheduling problem by maintaining local optimality whilst approaching the feasible region. Tests with published problems show that the technique gives results comparable with the best published algorithms. The practical requirements of real life scheduling are discussed, and the flexibility of the technique is demonstrated for a complex problem involving weekly cyclical deliveries.

Journal ArticleDOI
TL;DR: A heuristic, "unstructured" weight determination procedure was developed for harvest scheduling models in which goal programming algorithms are employed and contributes to improved forest management decisions by providing the optimal harvest scheduling plan under each of the four goals individually.
Abstract: A heuristic, "unstructured" weight determination procedure was developed for harvest scheduling models in which goal programming algorithms are employed. Six noninferior solution sets that included...

Journal ArticleDOI
TL;DR: This paper provides a much simpler mathematical formulation of the multi-level lot-sizing problem than those found in the literature, which was used to advantage in determining optimum solutions to the aforementioned problems with IBM's Mathematical Programming System Extended (MPSX).