
Showing papers on "Heuristic (computer science) published in 1988"


Journal ArticleDOI
TL;DR: This paper finds that the pivot algorithm is extraordinarily efficient: one “effectively independent” sample can be produced in a computer time of order N, and presents a rigorous proof of ergodicity and numerical results on self-avoiding walks in two and three dimensions.
Abstract: The pivot algorithm is a dynamic Monte Carlo algorithm, first invented by Lal, which generates self-avoiding walks (SAWs) in a canonical (fixed-N) ensemble with free endpoints (here N is the number of steps in the walk). We find that the pivot algorithm is extraordinarily efficient: one “effectively independent” sample can be produced in a computer time of order N. This paper is a comprehensive study of the pivot algorithm, including: a heuristic and numerical analysis of the acceptance fraction and autocorrelation time; an exact analysis of the pivot algorithm for ordinary random walk; a discussion of data structures and computational complexity; a rigorous proof of ergodicity; and numerical results on self-avoiding walks in two and three dimensions. Our estimates for critical exponents are ν=0.7496±0.0007 in d=2 and ν=0.592±0.003 in d=3 (95% confidence limits), based on SAWs of lengths 200⩽N⩽10000 and 200⩽N⩽3000, respectively.
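
As an illustration of the pivot move described in the abstract, the sketch below generates a self-avoiding walk on the square lattice by repeatedly applying a random lattice symmetry to the tail of the walk about a random pivot site and rejecting any move that self-intersects. This is a minimal assumed implementation (the straight-rod start, the O(N) hash-set occupancy check, and all names are illustrative; the paper's data structures are more sophisticated):

```python
import random

# The seven non-identity symmetries of the square lattice, acting on a displacement.
SYMMETRIES = [
    lambda x, y: (-y, x), lambda x, y: (-x, -y), lambda x, y: (y, -x),
    lambda x, y: (x, -y), lambda x, y: (-x, y), lambda x, y: (y, x),
    lambda x, y: (-y, -x),
]

def pivot_move(walk):
    """One pivot move: pick a pivot site and a random lattice symmetry, apply
    it to the tail of the walk, and accept only if the result is self-avoiding."""
    n = len(walk) - 1                       # number of steps N
    k = random.randrange(1, n)              # pivot index, not an endpoint
    g = random.choice(SYMMETRIES)
    px, py = walk[k]
    head = walk[:k + 1]
    occupied = set(head)                    # self-avoidance check via hashing
    new_tail = []
    for x, y in walk[k + 1:]:
        dx, dy = g(x - px, y - py)
        site = (px + dx, py + dy)
        if site in occupied:
            return walk                     # reject: proposed walk intersects itself
        occupied.add(site)
        new_tail.append(site)
    return head + new_tail                  # accept

def sample_saw(n_steps, n_moves):
    walk = [(i, 0) for i in range(n_steps + 1)]   # start from a straight rod
    for _ in range(n_moves):
        walk = pivot_move(walk)
    return walk

if __name__ == "__main__":
    w = sample_saw(200, 20_000)
    print("squared end-to-end distance:", w[-1][0] ** 2 + w[-1][1] ** 2)
```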

793 citations


Journal ArticleDOI
TL;DR: This paper presents a comprehensive model framework for linking decisions and performance throughout the material-production-distribution supply chain to support analysis of alternative manufacturing material/service strategies.
Abstract: This paper presents a comprehensive model framework for linking decisions and performance throughout the material-production-distribution supply chain. The purpose of the model is to support analysis of alternative manufacturing material/service strategies. A series of linked, approximate submodels and an heuristic optimization procedure are introduced. A prototype software implementation is also discussed.

659 citations


Book
15 Sep 1988
TL;DR: This book presents the SIPE planning system, treating hierarchical planning as planning at differing abstraction levels, and covers constraints, deductive causal theories, plan critics, resources (reusable, consumable, temporal), search, replanning during execution, reactivity, and heuristic adequacy, with a comparison to other systems.
Abstract: 1. Reasoning about Actions and Planning; 2. Basic Assumptions and Limitations; 3. SIPE and Its Representations; 4. Hierarchical Planning as Differing Abstraction Levels; 5. Constraints; 6. The Truth Criterion; 7. Deductive Causal Theories; 8. Plan Critics; 9. Resources: Reusable, Consumable, Temporal; 10. Search; 11. Replanning During Execution; 12. Planning and Reactivity; 13. Achieving Heuristic Adequacy; 14. Comparison with Other Systems.

551 citations


Journal ArticleDOI
TL;DR: The effectiveness of the proposed heuristic algorithm in finding a minimum makespan schedule is empirically evaluated and found to increase with the number of jobs.
Abstract: This paper describes the two-stage flowshop problem when there are identical multiple machines at each stage, and shows that the problem is NP-complete. An efficient heuristic algorithm is developed for finding an approximate solution of a special case when there is only one machine at stage 2. The effectiveness of the proposed heuristic algorithm in finding a minimum makespan schedule is empirically evaluated and found to increase with the number of jobs.
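
The paper's heuristic itself is not reproduced in the abstract; the sketch below is only a generic list-scheduling baseline for the special case described (several identical machines at stage 1, a single machine at stage 2), sequencing jobs by Johnson's rule and assigning stage-1 operations to the earliest available machine. The job data and all names are hypothetical:

```python
import heapq

def makespan_two_stage(jobs, m1):
    """Generic baseline for the two-stage hybrid flowshop: m1 identical machines
    at stage 1 and one machine at stage 2. jobs is a list of (p1, p2) processing
    times; the function returns the makespan of the heuristic schedule."""
    # Johnson's rule ordering (a heuristic here; it is optimal only for the 1+1 machine case)
    order = sorted(range(len(jobs)),
                   key=lambda j: (0, jobs[j][0]) if jobs[j][0] <= jobs[j][1]
                                 else (1, -jobs[j][1]))
    free_at = [0.0] * m1                    # stage-1 machine availability times
    heapq.heapify(free_at)
    stage1_completions = []
    for j in order:
        start = heapq.heappop(free_at)      # earliest available stage-1 machine
        finish = start + jobs[j][0]
        heapq.heappush(free_at, finish)
        stage1_completions.append((finish, j))
    # Stage 2: the single machine processes jobs in order of stage-1 completion.
    t = 0.0
    for finish1, j in sorted(stage1_completions):
        t = max(t, finish1) + jobs[j][1]
    return t

if __name__ == "__main__":
    print(makespan_two_stage([(3, 6), (5, 2), (1, 2), (4, 4), (2, 5)], m1=2))
```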

525 citations


Journal ArticleDOI
TL;DR: In this article, the authors systematically study the performance behavior of beam search with other heuristic methods for scheduling and the effects of using different evaluation functions to guide the search, and develop a new variation of beam search, called filtered beam search, which is computationally simple yet produces high-quality solutions.
Abstract: Beam search is a technique for searching decision trees, particularly where the solution space is vast. The technique involves systematically developing a small number of solutions in parallel so as to attempt to maximize the probability of finding a good solution with minimal search effort. In this paper, we systematically study the performance behaviour of beam search with other heuristic methods for scheduling, and the effects of using different evaluation functions to guide the search. We also develop a new variation of beam search, called filtered beam search, which is computationally simple yet produces high quality solutions.
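
The distinction between plain and filtered beam search can be shown with a small sequencing sketch: a cheap priority rule first filters the children of each beam node, and only the filtered candidates are scored with the more expensive evaluation function. The job data, the earliest-due-date filter, and the total-tardiness evaluation are assumptions for illustration, not the evaluation functions studied in the paper:

```python
def filtered_beam_search(jobs, beam_width=3, filter_width=6):
    """jobs: list of (processing_time, due_date).
    Builds job sequences level by level. A cheap filter keeps the filter_width
    most urgent extensions of each beam node; a costlier evaluation (total
    tardiness of the partial schedule) then keeps the best beam_width nodes."""
    def tardiness(seq):
        t, tard = 0, 0
        for j in seq:
            t += jobs[j][0]
            tard += max(0, t - jobs[j][1])
        return tard

    beam = [()]                                    # partial sequences
    for _ in range(len(jobs)):
        candidates = []
        for seq in beam:
            remaining = [j for j in range(len(jobs)) if j not in seq]
            remaining.sort(key=lambda j: jobs[j][1])   # cheap filter: earliest due date
            for j in remaining[:filter_width]:
                candidates.append(seq + (j,))
        candidates.sort(key=tardiness)             # expensive evaluation on filtered set only
        beam = candidates[:beam_width]
    best = beam[0]
    return best, tardiness(best)

if __name__ == "__main__":
    jobs = [(4, 6), (2, 5), (6, 12), (3, 7), (5, 20)]
    print(filtered_beam_search(jobs))
```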

354 citations


Journal ArticleDOI
TL;DR: The problem of allocating the data of a database to the sites of a communication network is investigated and a model that makes it possible to compare the cost of allocations is presented.
Abstract: The problem of allocating the data of a database to the sites of a communication network is investigated. This problem deviates from the well-known file allocation problem in several aspects. First, the objects to be allocated are not known a priori; second, these objects are accessed by schedules that contain transmissions between objects to produce the result. A model that makes it possible to compare the cost of allocations is presented; the cost can be computed for different cost functions and for processing schedules produced by arbitrary query processing algorithms. For minimizing the total transmission cost, a method is proposed to determine the fragments to be allocated from the relations in the conceptual schema and the queries and updates executed by the users. For the same cost function, the complexity of the data allocation problem is investigated. Methods for obtaining optimal and heuristic solutions under various ways of computing the cost of an allocation are presented and compared. Two different approaches to the allocation management problem are presented and their merits are discussed.

285 citations


Journal ArticleDOI
Robert J. Wittrock1
TL;DR: An algorithm that schedules the loading of parts into a manufacturing line to minimize the makespan and secondarily to minimize queueing is presented.
Abstract: Consider a manufacturing line that produces parts of several types. Each part must be processed by at most one machine in each of several banks of machines. This paper presents an algorithm that schedules the loading of parts into such a line. The objective is primarily to minimize the makespan and secondarily to minimize queueing. The problem is decomposed into three subproblems and each of these is solved using a fast heuristic. The most challenging subproblem is that of finding a good loading sequence, and this is addressed using workload concepts and an approximation to dynamic programming. We make several extensions to the algorithm in order to handle limited storage capacity, expediting, and reactions to system dynamics. The algorithm was tested by computing schedules for a real production line, and the results are discussed.

180 citations


Journal ArticleDOI
TL;DR: The concept of generalized gradients is proposed to compute the delay sensitivities of the transistor sizing problem and it is shown that the approach is a good compromise between the speed of the heuristic algorithm and the power of mathematical programming.
Abstract: A combined heuristic and mathematical programming approach to transistor sizing is presented. A fast heuristic algorithm is used to obtain an initial sizing of the circuit and convert the transistor sizing problem into a nonlinear optimization problem. The problem is then solved, in spaces of reduced dimensionality, by mathematical programming techniques. To cope with the nondifferentiability of the circuit delays, the concept of generalized gradients is proposed to compute the delay sensitivities. Experiments justify the use of this sensitivity computation technique and show that the approach is a good compromise between the speed of the heuristic algorithm and the power of mathematical programming.

177 citations


Proceedings ArticleDOI
01 Jun 1988
TL;DR: A novel via minimization approach is presented for two-layer routing of printed-circuit boards and VLSI chips, along with a practical heuristic algorithm that can handle both grid-based and gridless routing.
Abstract: A novel via minimization approach is presented for two-layer routing of printed-circuit boards and VLSI chips. The authors have analyzed and characterized different aspects of the problem and derived an equivalent graph model for the problem from the linear-programming formulation. Based on the analysis of their unified formulation, the authors pose a practical heuristic algorithm. The algorithm can handle both grid-based and gridless routing. Also, an arbitrary number of wires is allowed to intersect at a via, and both Manhattan and knock-knee routings are allowed.

170 citations


Journal ArticleDOI
TL;DR: A new algorithm is presented for the optimal solution of the 0-1 Knapsack problem, which is particularly effective for large-size problems, and incorporates a new method of computation of upper bounds and efficient implementations of reduction procedures.
Abstract: We present a new algorithm for the optimal solution of the 0-1 Knapsack problem, which is particularly effective for large-size problems. The algorithm is based on determination of an appropriate small subset of items and the solution of the corresponding "core problem": from this we derive a heuristic solution for the original problem which, with high probability, can be proved to be optimal. The algorithm incorporates a new method of computation of upper bounds and efficient implementations of reduction procedures. The corresponding Fortran code is available. We report computational experiments on small-size and large-size random problems, comparing the proposed code with all those available in the literature.
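
The "core problem" idea can be illustrated as follows: rank items by profit-to-weight ratio, tentatively fix the clearly attractive items in and the clearly unattractive items out, and solve exactly only a small core around the break item. This toy sketch (brute-force core enumeration, fixed core width, no reduction procedures or optimality proof) is meant only to convey the idea and is not the paper's algorithm:

```python
from itertools import product

def core_knapsack(profits, weights, capacity, core_half_width=4):
    """Toy illustration of the core-problem idea for the 0-1 knapsack."""
    n = len(profits)
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
    # Break item: the first item that no longer fits when filling greedily by ratio.
    w, b = 0, n
    for pos, i in enumerate(order):
        if w + weights[i] > capacity:
            b = pos
            break
        w += weights[i]
    lo, hi = max(0, b - core_half_width), min(n, b + core_half_width)
    fixed_in = order[:lo]                    # high-ratio items assumed taken
    core = order[lo:hi]                      # small core decided by enumeration
    residual = capacity - sum(weights[i] for i in fixed_in)
    base_profit = sum(profits[i] for i in fixed_in)
    best_profit, best_subset = base_profit, ()
    for choice in product((0, 1), repeat=len(core)):
        wt = sum(weights[i] for i, c in zip(core, choice) if c)
        if wt <= residual:
            pf = base_profit + sum(profits[i] for i, c in zip(core, choice) if c)
            if pf > best_profit:
                best_profit = pf
                best_subset = tuple(i for i, c in zip(core, choice) if c)
    return best_profit, set(fixed_in) | set(best_subset)

if __name__ == "__main__":
    p = [15, 100, 90, 60, 40, 15, 10, 1]
    w = [2, 20, 20, 30, 40, 30, 60, 10]
    print(core_knapsack(p, w, capacity=102))
```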

144 citations


Proceedings ArticleDOI
24 Jul 1988
TL;DR: The authors demonstrate that Hopfield-type networks can find reasonable solutions to the traveling salesman problem (TSP) and the optimal list-matching problem (LMP) and show how to avoid the difficulties encountered by G.V. Wilson and G.S. Pawley by using a modified energy functional.
Abstract: The authors demonstrate that Hopfield-type networks can find reasonable solutions to the traveling salesman problem (TSP) and the optimal list-matching problem (LMP). They show how to avoid the difficulties encountered by G.V. Wilson and G.S. Pawley (1988) by using a modified energy functional which yields better solutions to the TSP than J.J. Hopfield and D.W. Tank's (1985) original formulation. In addition, two fixed-parameter networks are described, one for the TSP and the other for the LMP. The performance of the network for the TSP is comparable to the performance of the modified energy functional formulation, while the network for the list-matching problem is shown to perform better than a simple heuristic method. A major feature of these two networks is that the problem-dependent cost data are contained entirely in the linear term of the energy functional; the quadratic part contains only constraint information. This feature has the advantage that all costs can be presented to the network as inputs rather than as connection weights, much reducing hardware complexity.

Journal ArticleDOI
TL;DR: The Expansion Method, an analytical technique for modeling finite open queueing networks, and Powell's unconstrained optimization procedure are integrated in a design methodology, which evaluates alternative line topologies, system throughputs, and their optimal buffer sizes.
Abstract: Automated assembly lines are modeled as finite open queueing networks and a heuristic for buffer space allocation within these lines is presented. The Expansion Method, an analytical technique for modeling finite open queueing networks, and Powell's unconstrained optimization procedure are integrated in a design methodology, which evaluates alternative line topologies, system throughputs, and their optimal buffer sizes. The resulting design methodology is demonstrated for series, merging and splitting topologies of automated assembly lines with balanced and unbalanced service rates.
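
A rough sketch of the outer optimization loop implied above: a throughput evaluator is wrapped in Powell's derivative-free search over buffer sizes. The three-station line, the smooth placeholder throughput formula, the penalty on exceeding a buffer budget, and the use of scipy are all assumptions for illustration; the paper evaluates throughput with the Expansion Method rather than the toy formula used here:

```python
import numpy as np
from scipy.optimize import minimize

def throughput(buffers, rates=(1.0, 0.9, 1.1)):
    """Placeholder throughput model for a three-station serial line: a smooth
    stand-in for the Expansion Method's queueing-network evaluation."""
    b = np.maximum(np.asarray(buffers, dtype=float), 0.0)
    bottleneck = min(rates)
    return bottleneck * np.prod(b / (b + 1.0))   # approaches the bottleneck rate as buffers grow

def optimal_buffers(total_budget=12.0):
    """Maximize throughput over the two buffer sizes, penalizing any excess
    over the total buffer budget, using Powell's unconstrained method."""
    def objective(b):
        penalty = 100.0 * max(0.0, float(np.sum(b)) - total_budget) ** 2
        return -throughput(b) + penalty
    result = minimize(objective, x0=np.full(2, total_budget / 2), method="Powell")
    return result.x, throughput(result.x)

if __name__ == "__main__":
    sizes, tput = optimal_buffers()
    print("buffer sizes:", np.round(sizes, 2), "throughput:", round(float(tput), 3))
```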

Journal ArticleDOI
TL;DR: The problem of defining the territories for 168 sales agents of a German manufacturer of consumer goods is solved by means of a location-allocation approach involving a standard code of a primal network algorithm as well as a new heuristic for resolving split areas.

Journal ArticleDOI
TL;DR: A heuristic for the traveling salesman location problem on a network that requires O(n³) time to find the location that "minimizes" the expected distance traveled is presented.
Abstract: In this paper, we present a heuristic for the traveling salesman location problem on a network. Each day the salesman (e.g., a repair vehicle) must visit all the calls that are registered in a service list. Each call is generated with a given probability and the service list contains at most n calls. The heuristic requires O(n³) time to find the location that "minimizes" the expected distance traveled. A worst case analysis of the heuristic indicates that it will produce a solution which is at most 50% worse than the optimal solution. The paper also contains several asymptotic results for the problem in the plane.

Journal ArticleDOI
TL;DR: Heuristic algorithms for batching a set of orders such that the total distance travelled by the order picking machine is minimized are presented and their efficiency and validity are illustrated through computer simulation.
Abstract: This paper deals with an order picking problem in an automated storage and retrieval system (AS/RS). We present heuristic algorithms for batching a set of orders such that the total distance travelled by the order picking machine is minimized. These algorithms are based on cluster analysis and their efficiency and validity are illustrated through computer simulation. The results show that the algorithms developed perform substantially better than those from previous studies.
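
A minimal sketch of a clustering-style batching heuristic: batches are merged greedily whenever the merge increases a cheap travel-distance proxy the least, subject to a capacity limit on batch size. The rectilinear-span proxy, the capacity rule, and the sample data are assumptions for illustration, not the cluster-analysis algorithms of the paper:

```python
def batch_orders(orders, capacity):
    """orders: list of orders, each a list of item locations (x, y) in the rack.
    Greedy agglomerative batching: repeatedly merge the two batches whose
    combined travel proxy (rectilinear span of visited locations) grows least."""
    def span(locs):
        xs = [x for x, _ in locs]
        ys = [y for _, y in locs]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    batches = [list(o) for o in orders]
    sizes = [len(o) for o in orders]
    while True:
        best = None
        for i in range(len(batches)):
            for j in range(i + 1, len(batches)):
                if sizes[i] + sizes[j] > capacity:
                    continue                      # merge would exceed batch capacity
                growth = span(batches[i] + batches[j]) - span(batches[i]) - span(batches[j])
                if best is None or growth < best[0]:
                    best = (growth, i, j)
        if best is None:
            return batches                        # no feasible merge remains
        _, i, j = best
        batches[i] += batches[j]
        sizes[i] += sizes[j]
        del batches[j], sizes[j]

if __name__ == "__main__":
    orders = [[(1, 2), (3, 4)], [(2, 2)], [(10, 1), (11, 3)], [(9, 2)]]
    print(batch_orders(orders, capacity=3))
```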

Patent
20 Dec 1988
TL;DR: In this paper, an artificial intelligence module comprising an inference engine, a memory connected to the inference engine that stores a set of heuristic rules, and a knowledge base memory is described; the knowledge base memory can also feed network information back to the rule base memory, which can thus update its rules.
Abstract: A communications system utilizes artificial intelligence to select connectivity paths among various locations in a communications network. An embodiment shown is that of a packet radio network, wherein an artificial intelligence module, located at one or more of the radio sites in the network, applies a set of heuristic rules to a knowledge base obtained from network experience to select connectivity paths through the network. The artificial intelligence module comprises an inference engine, a memory for storing network data obtained from a radio receiver and transmitting it to the inference engine, a memory connected to the inference engine which stores a set of heuristic rules for the artificial intelligence system, and a knowledge base memory which stores network information upon which the inference engine draws. The knowledge base memory is also capable of feeding back network information to the rule base memory, which can thus update its rules. Also shown is an embodiment of a multimedia communications network.

Book ChapterDOI
01 Jun 1988
TL;DR: This paper examines the computational optimality of A*, in the sense of never expanding a node that could be skipped by some other algorithm having access to the same heuristic information that A* uses.
Abstract: This paper examines the computational optimality of A*, in the sense of never expanding a node that could be skipped by some other algorithm having access to the same heuristic information that A* uses. We define four optimality types, and consider three classes of algorithms and four domains of problem instances relative to which computational performances are appraised. For each class-domain combination, we then identify the strongest type of optimality that exists and the algorithm achieving it. Our main results relate to the class of algorithms which, like A*, return optimal solutions (i.e., admissible) when all cost estimates are optimistic (i.e., h≤h*). On this class we show that A* is not optimal and that no optimal algorithm exists, but if we confine the performance tests to cases where the estimates are also consistent, then A* is indeed optimal. Additionally, we show that A* is optimal over a subset of the latter class containing all best-first algorithms that are guided by path-dependent evaluation functions.
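
For reference, a textbook A* implementation over an explicit graph, with the admissibility and consistency conditions discussed above noted in the comments; the small graph and heuristic values are made-up examples:

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: dict node -> list of (neighbor, edge_cost); h: dict node -> heuristic.
    If h is admissible (h <= true remaining cost, i.e. h <= h*), the returned path
    is optimal; if h is also consistent, no node needs to be expanded twice."""
    open_heap = [(h[start], 0, start, [start])]    # entries are (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue                               # stale queue entry
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(open_heap, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return float("inf"), None

if __name__ == "__main__":
    graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 3)]}
    h = {"S": 4, "A": 4, "B": 3, "G": 0}           # admissible and consistent estimates
    print(a_star(graph, h, "S", "G"))              # -> (6, ['S', 'A', 'B', 'G'])
```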

Journal ArticleDOI
TL;DR: Two basic components of the knowledge-based system, namely the expert system and the heuristic clustering algorithm, are discussed; the system considers alternative process plans and multiple machines for solving the generalized group technology problem.
Abstract: In this paper a knowledge based system (EXGT-S) for solving the generalized group technology problem is presented. The formulation of the group technology problem involves constraints related to machine capacity, material handling system capabilities, machine cell dimensions and technological requirements. It has been developed for an automated manufacturing system. EXGT-S is based on the tandem system architecture presented in Kusiak (1987). It considers alternative process plans and multiple machines. EXGT-S takes advantage of the developments in expert systems and optimization. Two basic components of the knowledge based system, namely the expert system and the heuristic clustering algorithm, are discussed. Each partial solution generated by the clustering algorithm is evaluated by the expert system, which modifies the search directions of the algorithm.

Journal ArticleDOI
TL;DR: It has been determined that a good heuristic that produces near optimal solutions within reasonable limits of percentage deviation is required for solving medium and large size problems.

Patent
Joel L. Wolf1
02 Sep 1988
TL;DR: In this article, the file assignment problem is partitioned into two sequential optimization problems, called the macro model and the micro model, and the output from the optimization is an "optimal" assignment of files to DASDs.
Abstract: A practical mathematical algorithm is used to solve the so-called "File Assignment Problem" (FAP). The FAP is partitioned into two sequential optimization problems, called the macro model and the micro model. The macro model is solved by a Non-Linear Programming Model (NLPM) and a Queuing Network Model (QNM). The NLPM takes as input detailed information on the computer system configuration and performance characteristics down through the DASD level, and, using the QNM as its objective function evaluator, determines the "optimal" DASD relative access rates as output. The micro model is solved by a Binary Linear Programming Model (BLPM), although the QNM is also involved to help determine the BLPM stopping criteria. The input to the micro model consists basically of the output from the macro model, together with statistics on the access rates of the various files in the computer system. The output from the optimization is an "optimal" assignment of files to DASDs. The micro model algorithm can be utilized in either an unlimited file movement mode or a limited file movement mode, the former being used when the computer system undergoes a major reconfiguration while the latter is used on a once per week basis. The BLPM is solved by a "neighborhood escape" type heuristic. The procedure provides a real-world, practical solution to the FAP resulting in significant increases in performance.

Journal ArticleDOI
TL;DR: New heuristic algorithms based on cluster analysis are presented for the order processing problem in a man-on-board automated storage and retrieval system (AS/RS), and results indicate that some of the algorithms developed perform substantially better than the others.

Journal ArticleDOI
TL;DR: A heuristic algorithm for the design of a communications network that consists of a central computer and geographically dispersed locations that communicate with the computer offers the following new features: constraints on the availability and reliability of the network as well as on the expected delay.

Journal ArticleDOI
TL;DR: It is shown that the decision form of this problem is NP-complete, even when the processing times on one machine only are controllable and all the processing cost units are identical.

Journal ArticleDOI
TL;DR: In this article, a solution technique is developed which addresses large problems with various numbers and sizes of vehicles and customers in the network, and an optimal solution is derived using exact algorithms.
Abstract: A solution technique is developed which addresses large problems with various numbers and sizes of vehicles and customers in the network. This vehicle routing problem is a further extension of the Multiple Travelling Salesman problem. Given an heuristic solution, limits are set for each component, the violation of which implies that the component is an illegal subtour. An optimal solution is derived using exact algorithms.

Journal ArticleDOI
TL;DR: It is shown that the CVM3 problem is NP-complete and a heuristic algorithm is proposed and the experimental results show that the proposed algorithm is efficient and generates fairly good solutions.
Abstract: The layer assignment problem for interconnect is the problem of determining which layers should be used for wiring the signal nets so that the number of vias is minimized. The problem is often referred to as the via minimization problem. The problem is considered for three-layer routing, concentrating on one version called the constrained via minimization (CVM3) problem. It is shown that the CVM3 problem is NP-complete and a heuristic algorithm is proposed. The experimental results show that the proposed algorithm is efficient and generates fairly good solutions.

01 Jan 1988
TL;DR: An efficient annealing schedule is presented which speeds up simulated annealing by a factor of up to twenty-four when compared with general schedules currently available in the literature; the method is also compared with efficient heuristics on well-studied problems.
Abstract: The popularity of simulated annealing comes from its ability to find close to optimal solutions for NP-hard combinatorial optimization problems. Unfortunately, the method has a major drawback: its massive requirement of computation time. In this dissertation, we present an efficient annealing schedule which speeds up simulated annealing by a factor of up to twenty-four when compared with general schedules currently available in the literature. The efficient annealing schedule, which lowers the temperature at every step and keeps the system in quasi-equilibrium at all times, is derived from a new quasi-equilibrium criterion. For a given move generation strategy and a given number of steps, this schedule is shown to give the minimum final average cost among all schedules that maintain the system in quasi-equilibrium. Furthermore, with the introduction of two models, we derive an alternate form of this schedule that relates move generation to temperature decrement. At every step, the move generation is controlled to minimize the response time of the system to a change of temperature, leading to the largest decrement in average cost while satisfying the quasi-equilibrium criterion. Most of the practical applications of simulated annealing have been in complicated problem domains, where algorithms either did not exist or performed poorly. To assess the performance of simulated annealing as a general method for solving combinatorial optimization problems, we also compare the method with efficient heuristics on well-studied problems: the traveling salesman problem and the graph partition problem. For high quality solutions and for problems with a small number of close to optimal solutions, our test results indicate that simulated annealing out-performs the heuristics by Lin and Kernighan and by Karp for the traveling salesman problem, and multiple executions of the heuristic by Fiduccia and Mattheyses for the graph partition problem.
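
For orientation, the sketch below is a baseline simulated annealing loop for the traveling salesman problem with a plain geometric cooling schedule and 2-opt reversal moves; the dissertation's contribution is precisely a more efficient, quasi-equilibrium-based schedule than the fixed one shown here, and the instance and parameters are illustrative assumptions:

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def simulated_annealing(dist, t0=10.0, cooling=0.995, moves_per_t=100, t_min=1e-3):
    """Baseline annealing: geometric cooling, 2-opt reversal moves, Metropolis acceptance."""
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    cost = tour_length(tour, dist)
    best, best_cost = tour[:], cost
    t = t0
    while t > t_min:
        for _ in range(moves_per_t):
            i, j = sorted(random.sample(range(n), 2))
            new_tour = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse a segment
            new_cost = tour_length(new_tour, dist)
            # Accept improvements always; accept uphill moves with Boltzmann probability.
            if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
                tour, cost = new_tour, new_cost
                if cost < best_cost:
                    best, best_cost = tour[:], cost
        t *= cooling
    return best, best_cost

if __name__ == "__main__":
    pts = [(random.random(), random.random()) for _ in range(30)]
    dist = [[math.dist(p, q) for q in pts] for p in pts]
    print(round(simulated_annealing(dist)[1], 3))
```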

Journal ArticleDOI
TL;DR: A heuristic is proposed and evaluated, and it is found to give satisfactory performance when applied to a problem with twenty-five nodes; the problem is placed within the context of multiobjective programming.
Abstract: A generalization of the travelling salesman problem is introduced. Each node has an associated reward, and a penalty is incurred by travelling between nodes. In the multiobjective vending problem, the subset of nodes and associated tour which will minimize penalty and maximize reward is sought. The problem is placed within the context of multiobjective programming. A heuristic is proposed and evaluated, and it is found to give satisfactory performance when applied to a problem with twenty-five nodes. Further generalizations are suggested.

Journal ArticleDOI
TL;DR: In this paper, a non-permutation flow shop scheduling problem is studied, in which the duration of each operation on certain machines is a linear function of the allotted part of a constrained resource (e.g., energy, fuel, catalyzer, oxygen, raw material, money).
Abstract: The paper deals with a version of the general non-permutation flow-shop scheduling problem in which the duration of each operation on certain machines is a linear function of the allotted part of a constrained resource (e.g. energy, fuel, catalyzer, oxygen, raw material, money) and the objective is to determine both a sequence of the operations on each machine and an allocation of resource to each operation in order to minimize the over-all completion time. The algorithm for solving this problem is based on disjunctive graph theory and the branch-and-bound technique. Owing to the elimination properties presented, the search tree is strongly restricted. The algorithm applies a special system of fixing precedence relations in each node of the search tree, yielding strong lower and upper bounds. To quickly obtain a strong upper bound, a special heuristic method is used (to calculate a good initial solution) and the descendants in the search tree are chosen in order of non-decreasing lower bounds. Some computational results...

Journal ArticleDOI
TL;DR: A new approximate algorithm for multidimensional zero-one knapsack problems with all positive coefficients is presented and the solution found was on the average within 0.34% of the optimum and the computation time was the shortest compared with three other well-known heuristics.
Abstract: A new approximate algorithm for multidimensional zero-one knapsack problems with all positive coefficients is presented. The procedure is controlled by three parameters which affect the tradeoff between solution quality and computation time and whose values are set by the users. For 48 test problems with 5 to 20 constraints and 6 to 500 variables, the solution found was on the average within 0.34% of the optimum and the computation time was the shortest compared with three other well-known heuristics.
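
As a point of comparison, a generic greedy baseline for the multidimensional 0-1 knapsack: items are ranked by profit divided by the sum of their relative resource consumptions and added while every constraint remains satisfied. This common textbook pseudo-utility rule is an assumption for illustration and is not the three-parameter procedure of the paper:

```python
def greedy_mkp(profits, weights, capacities):
    """weights[i][k]: consumption of resource k by item i; all coefficients positive.
    Rank items by profit / sum_k(weights[i][k] / capacities[k]) and add greedily."""
    n, m = len(profits), len(capacities)

    def ratio(i):
        return profits[i] / sum(weights[i][k] / capacities[k] for k in range(m))

    remaining = list(capacities)
    chosen = []
    for i in sorted(range(n), key=ratio, reverse=True):
        if all(weights[i][k] <= remaining[k] for k in range(m)):
            chosen.append(i)
            for k in range(m):
                remaining[k] -= weights[i][k]
    return chosen, sum(profits[i] for i in chosen)

if __name__ == "__main__":
    profits = [10, 13, 7, 8]
    weights = [[3, 5], [4, 4], [2, 3], [3, 2]]
    capacities = [7, 8]
    print(greedy_mkp(profits, weights, capacities))
```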

Proceedings ArticleDOI
01 Jun 1988
TL;DR: EPOXY will size a circuit's transistors and will attempt small circuit changes to help meet the constraints, and the system provides a flexible framework within which to evaluate the effects of different area and electrical models, as well as different optimization algorithms.
Abstract: Electrical performance and area improvement are important parts of the overall VLSI design task. Given designer specified constraints on area, delay, and power, EPOXY will size a circuit's transistors and will attempt small circuit changes to help meet the constraints. In addition, the system provides a flexible framework within which to evaluate the effects of different area and electrical models, as well as different optimization algorithms. Since the sum of transistor area is a better measure of dynamic power than cell area, a more accurate area model is presented. Optimization of a CMOS eight-stage inverter chain illustrates this difference; a typical minimum power implementation is 32.3% larger than the one for minimum area. The combination of a TILOS-style heuristic and augmented Lagrangian optimization algorithm yields quality results rapidly. EPOXY'S circuit analysis is from 5 to 56 times faster than Crystal.