
Showing papers on "Heuristic (computer science) published in 1996"


Journal ArticleDOI
TL;DR: Several modifications to the basic genetic procedures are proposed including a new fitness-based crossover operator (fusion), a variable mutation rate and a heuristic feasibility operator tailored specifically for the set covering problem.

670 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explore design principles for next-generation optical wide-area networks, employing wavelength-division multiplexing (WDM) and targeted to nationwide coverage, and formulate the virtual topology design problem as an optimization problem with one of two possible objective functions: (1) for a given traffic matrix, minimize the network-wide average packet delay (corresponding to a solution for present traffic demands), or (2) maximize the scale factor by which the traffic matrix can be scaled up (to provide the maximum capacity upgrade for future traffic demands).
Abstract: We explore design principles for next-generation optical wide-area networks, employing wavelength-division multiplexing (WDM) and targeted to nationwide coverage. This optical network exploits wavelength multiplexers and optical switches in routing nodes, so that an arbitrary virtual topology may be embedded on a given physical fiber network. The virtual topology, which is used as a packet-switched network and which consists of a set of all-optical "lightpaths", is set up to exploit the relative strengths of both optics and electronics; viz., packets of information are carried by the virtual topology "as far as possible" in the optical domain, but packet forwarding from lightpath to lightpath is performed via electronic switching, whenever required. We formulate the virtual topology design problem as an optimization problem with one of two possible objective functions: (1) for a given traffic matrix, minimize the network-wide average packet delay (corresponding to a solution for present traffic demands), or (2) maximize the scale factor by which the traffic matrix can be scaled up (to provide the maximum capacity upgrade for future traffic demands). Since simpler versions of this problem have been shown to be NP-hard, we resort to heuristic approaches. Specifically, we employ an iterative approach which combines "simulated annealing" (to search for a good virtual topology) and "flow deviation" (to optimally route the traffic, and possibly bifurcate its components, on the virtual topology). We do not consider the number of available wavelengths to be a constraint, i.e., we ignore the routing of lightpaths and wavelength assignment for these lightpaths. We illustrate our approaches by employing experimental traffic statistics collected from NSFNET.

476 citations
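The heuristic combination described above (simulated annealing to search among topologies, flow deviation to route traffic over each one) can be illustrated with a generic annealing skeleton. This is only a sketch, not the authors' procedure: the cost function below is a toy stand-in (distance of a 0/1 adjacency vector from a hypothetical target) rather than average packet delay, and all parameter values are arbitrary.

```python
import math
import random

def simulated_annealing(cost, initial, neighbor, t0=10.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated-annealing skeleton: always accept improving moves,
    accept worsening moves with probability exp(-delta/T), and cool the
    temperature geometrically."""
    rng = random.Random(seed)
    current = best = initial
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        delta = cost(cand) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best

# Toy stand-in for the real objective: Hamming distance of a 0/1
# "virtual topology" vector from a hypothetical target (made-up data).
target = (1, 0, 1, 1, 0, 1)
cost = lambda v: sum(a != b for a, b in zip(v, target))

def flip_one(v, rng):
    """Neighbor move: flip one randomly chosen entry."""
    i = rng.randrange(len(v))
    return v[:i] + (1 - v[i],) + v[i + 1:]

best = simulated_annealing(cost, (0,) * 6, flip_one)
```

In the paper's setting, each candidate topology would additionally be evaluated by running flow deviation to route (and possibly bifurcate) the traffic before its delay could be measured.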


Journal ArticleDOI
TL;DR: In this article, a large-scale multicommodity, multi-modal network flow problem with time windows is formulated, and two solution methods are proposed for this very complex logistical problem in disaster relief management.
Abstract: This paper presents a formulation and two solution methods for a very complex logistical problem in disaster relief management. The problem to be addressed is a large-scale multicommodity, multi-modal network flow problem with time windows. Due to the nature of this problem, the size of the optimization model which results from its formulation grows extremely rapidly as the number of modes and/or commodities increases. The formulation of the problem is based on the concept of a time-space network. Two heuristic algorithms are proposed. One is a heuristic which exploits an inherent network structure of the problem with a set of side constraints and the other is an interactive fix-and-run heuristic. The findings of the model implementation are also presented using artificially generated data sets. The performance of the solution methods is examined over a range of small and large problems.

438 citations


Journal ArticleDOI
TL;DR: A problem formulation for solving the multiple constant multiplication (MCM) problem is introduced, in which the minimum number of shifts needed is computed first, and the number of additions is then minimized using common subexpression elimination.
Abstract: Many applications in DSP, telecommunications, graphics, and control have computations that either involve a large number of multiplications of one variable with several constants, or can easily be transformed to that form. A proper optimization of this part of the computation, which we call the multiple constant multiplication (MCM) problem, often results in a significant improvement in several key design metrics, such as throughput, area, and power. However, until now little attention has been paid to the MCM problem. After defining the MCM problem, we introduce an effective problem formulation for solving it where first the minimum number of shifts that are needed is computed, and then the number of additions is minimized using common subexpression elimination. The algorithm for common subexpression elimination is based on an iterative pairwise matching heuristic. The power of the MCM approach is augmented by preprocessing the computation structure with a new scaling transformation that reduces the number of shifts and additions. An efficient branch and bound algorithm for applying the scaling transformation has also been developed. The flexibility of the MCM problem formulation enables the application of the iterative pairwise matching algorithm to several other important and common high level synthesis tasks, such as the minimization of the number of operations in constant matrix-vector multiplications, linear transforms, and single and multiple polynomial evaluations. All applications are illustrated by a number of benchmarks.

362 citations
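A heavily simplified illustration of the common-subexpression idea behind the pairwise matching heuristic: write each constant multiplication as a sum of shifted copies of the input, then look for a two-term pattern (a fixed shift difference) that recurs across constants and can be computed once and shared. The paper's iterative pairwise matching is considerably more elaborate; everything below is illustrative only.

```python
from collections import Counter
from itertools import combinations

def shift_terms(c):
    """Decompose a positive constant into the shift positions of its set
    bits, so that c*x == sum(x << s for s in shift_terms(c))."""
    return [s for s in range(c.bit_length()) if (c >> s) & 1]

def best_common_pair(constants):
    """One scoring step of a pairwise-matching sketch: count how often each
    shift *difference* occurs between two terms of a constant. A frequent
    difference d means the subexpression x + (x << d) can be computed once
    and reused (after normalizing shifts), saving additions."""
    diffs = Counter()
    for c in constants:
        for a, b in combinations(shift_terms(c), 2):
            diffs[b - a] += 1
    return diffs.most_common(1)[0]

pair = best_common_pair([7, 14, 28])  # 7=0b111, 14=0b1110, 28=0b11100
```

Here 7, 14, and 28 are shifted copies of the same bit pattern, so the difference 1 dominates: computing y = x + (x << 1) once lets each of the three products be formed with a single further addition.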


Journal ArticleDOI
TL;DR: This paper responds to recent criticisms in Biological Conservation of heuristic reserve selection algorithms and shows that heuristics have practical advantages over classical methods and that suboptimality is not necessarily a disadvantage for many real-world applications.

325 citations


Journal ArticleDOI
TL;DR: A tabu search heuristic is developed for a version of the stochastic vehicle routing problem where customers are present at locations with some probabilities and have random demands; the heuristic produces an optimal solution in 89.45% of test cases.
Abstract: This paper considers a version of the stochastic vehicle routing problem where customers are present at locations with some probabilities and have random demands. A tabu search heuristic is developed for this problem. Comparisons with known optimal solutions on problems whose sizes vary from 6 to 46 customers indicate that the heuristic produces an optimal solution in 89.45% of cases, with an average deviation of 0.38% from optimality.

305 citations
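The tabu search machinery the abstract relies on (move to the best admissible neighbor even when it worsens the solution, forbid recently used moves for a tenure, allow aspiration) can be sketched on a toy problem. Below, the "solution" is just a permutation and the cost is its inversion count; the actual stochastic VRP neighborhoods and cost model are far richer, so this is only a skeleton.

```python
def inversions(p):
    """Toy cost: number of out-of-order pairs (0 iff sorted)."""
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def swap(p, m):
    """Apply a move: exchange positions i and j."""
    i, j = m
    q = list(p)
    q[i], q[j] = q[j], q[i]
    return tuple(q)

def tabu_search(start, iters=50, tenure=4):
    """Tabu-search skeleton: take the best neighbor whose move is not tabu
    (or beats the best known solution: 'aspiration'), and forbid recently
    used moves for `tenure` iterations so the search can escape local optima."""
    current = best = start
    tabu = {}  # move -> first iteration at which it is allowed again
    n = len(start)
    for it in range(iters):
        scored = []
        for m in [(i, j) for i in range(n) for j in range(i + 1, n)]:
            cand = swap(current, m)
            if tabu.get(m, -1) <= it or inversions(cand) < inversions(best):
                scored.append((inversions(cand), m, cand))
        if not scored:
            continue
        _, m, current = min(scored)
        tabu[m] = it + tenure
        if inversions(current) < inversions(best):
            best = current
    return best
```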


Journal ArticleDOI
TL;DR: Techniques that were originally developed in statistical mechanics can be applied to search problems that arise commonly in artificial intelligence and predict that abrupt changes in computational cost should occur universally, as heuristic effectiveness or search space topology is varied.

268 citations


Journal Article
TL;DR: In this article, the authors introduce a geometric version of the Covering Salesman Problem, in which each of the salesman's clients specifies a neighborhood in which they are willing to meet the salesman.

260 citations


Proceedings Article
03 Dec 1996
TL;DR: This work uses a reinforcement learning method to find dynamic channel allocation policies that are better than previous heuristic solutions; results are presented on a large cellular system with approximately 70^49 states.
Abstract: In cellular telephone systems, an important problem is to dynamically allocate the communication resource (channels) so as to maximize service in a stochastic caller environment. This problem is naturally formulated as a dynamic programming problem and we use a reinforcement learning (RL) method to find dynamic channel allocation policies that are better than previous heuristic solutions. The policies obtained perform well for a broad variety of call traffic patterns. We present results on a large cellular system with approximately 70^49 states.

252 citations
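The RL approach can be illustrated with tabular Q-learning on a made-up two-state accept/wait toy; the paper's cellular model, state space, and function approximation are of course very different, so treat this purely as a sketch of the update rule.

```python
import random

def q_learning(step, n_states, n_actions, steps=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=1):
    """Tabular Q-learning sketch: epsilon-greedy exploration plus the
    standard one-step TD update Q[s][a] += alpha*(r + gamma*max Q[s'] - Q[s][a])."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    s = 0
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda x: Q[s][x])
        r, s2 = step(s, a, rng)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
    return Q

# Hypothetical two-state toy: state 0 = channel free, state 1 = channel busy.
# Action 1 = accept the incoming call, action 0 = wait. Accepting when free
# serves a caller (reward 1, channel becomes busy); accepting when busy
# blocks the call (reward -1); waiting when busy lets the call finish.
def toy_cell(s, a, rng):
    if s == 0:
        return (1.0, 1) if a == 1 else (0.0, 0)
    return (-1.0, 1) if a == 1 else (0.0, 0)

Q = q_learning(toy_cell, n_states=2, n_actions=2)
```

The learned greedy policy should accept when the channel is free and wait when it is busy.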


Journal ArticleDOI
TL;DR: A penalty guided genetic algorithm which efficiently and effectively searches over promising feasible and infeasible regions to identify a final, feasible optimal, or near optimal, solution.

242 citations


Journal ArticleDOI
TL;DR: The model is general enough to encompass both resource leveling and limited resource allocation problems, unlike existing methods, which are class-dependent, and results obtained with it do not indicate an exponential growth in the computational time required for larger problems.
Abstract: A new approach for resource scheduling using genetic algorithms (GAs) is presented here. The methodology does not depend on any set of heuristic rules. Instead, its strength lies in the selection and recombination tasks of the GA to learn the domain of the specific project network. By this it is able to evolve improved schedules with respect to the objective function. Further, the model is general enough to encompass both resource leveling and limited resource allocation problems unlike existing methods, which are class-dependent. In this paper, the design and mechanisms of the model are described. Case studies with standard test problems are presented to demonstrate the performance of the GA-scheduler when compared against heuristic methods under various resource availability profiles. Results obtained with the proposed model do not indicate an exponential growth in the computational time required for larger problems.

15 May 1996
TL;DR: The main contributions of this thesis are an 8-fold speedup and 4-fold memory size reduction over the baseline Sphinx-II system, and the improvement in speed is obtained from the following techniques: lexical tree search, phonetic fast match heuristic, and global best path search of the word lattice.
Abstract: Advances in speech technology and computing power have created a surge of interest in the practical application of speech recognition. However, the most accurate speech recognition systems in the research world are still far too slow and expensive to be used in practical, large vocabulary continuous speech applications. Their main goal has been recognition accuracy, with emphasis on acoustic and language modelling. But practical speech recognition also requires the computation to be carried out in real time within the limited resources (CPU power and memory size) of commonly available computers. There has been relatively little work in this direction that also preserves the accuracy of research systems. In this thesis, we focus on efficient and accurate speech recognition. It is easy to improve recognition speed and reduce memory requirements by trading away accuracy, for example by greater pruning, and using simpler acoustic and language models. It is much harder to improve both the recognition speed and reduce main memory size while preserving the accuracy. This thesis presents several techniques for improving the overall performance of the CMU Sphinx-II system. Sphinx-II employs semi-continuous hidden Markov models for acoustics and trigram language models, and is one of the premier research systems of its kind. The techniques in this thesis are validated on several widely used benchmark test sets using two vocabulary sizes of about 20K and 58K words. The main contributions of this thesis are an 8-fold speedup and 4-fold memory size reduction over the baseline Sphinx-II system. The improvement in speed is obtained from the following techniques: lexical tree search, phonetic fast match heuristic, and global best path search of the word lattice.
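Of the speedup techniques listed, lexical tree search is easy to illustrate: store pronunciations in a prefix tree so that words sharing initial phones share nodes, and the acoustic evaluation of a shared prefix is done once instead of once per word. The lexicon below is a made-up example, not Sphinx-II's.

```python
def build_lexical_tree(lexicon):
    """Prefix-tree ("lexical tree") sketch: each node maps a phone to a
    child node; "#" marks the set of words ending at that node."""
    root = {}
    for word, phones in lexicon.items():
        node = root
        for ph in phones:
            node = node.setdefault(ph, {})
        node.setdefault("#", set()).add(word)
    return root

def count_nodes(node):
    """Number of phone nodes in the tree (word-end markers excluded)."""
    return sum(1 + count_nodes(child) for ph, child in node.items() if ph != "#")

# Hypothetical mini-lexicon: three words sharing the prefix S T AA.
lexicon = {
    "start": ["S", "T", "AA", "R", "T"],
    "star": ["S", "T", "AA", "R"],
    "stop": ["S", "T", "AA", "P"],
}
tree = build_lexical_tree(lexicon)
```

A flat lexicon would evaluate 13 phone instances; the tree has only 6 phone nodes, which is the source of the speedup.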

Proceedings Article
03 Sep 1996
TL;DR: In this paper, the authors present an optimization algorithm with complete rank-ordering, which is polynomial in the number of user-defined predicates (for a given number of relations).
Abstract: Relational databases provide the ability to store user-defined functions and predicates which can be invoked in SQL queries. When evaluation of a user-defined predicate is relatively expensive, the traditional method of evaluating predicates as early as possible is no longer a sound heuristic. There are two previous approaches for optimizing such queries. However, neither is able to guarantee the optimal plan over the desired execution space. We present efficient techniques that are able to guarantee the choice of an optimal plan over the desired execution space. The optimization algorithm with complete rank-ordering improves upon the naive optimization algorithm by exploiting the nature of the cost formulas for join methods and is polynomial in the number of user-defined predicates (for a given number of relations). We also propose pruning rules that significantly reduce the cost of searching the execution space for both the naive algorithm as well as for the optimization algorithm with complete rank-ordering, without compromising optimality. We also propose a conservative local heuristic that is simpler and has low optimization overhead. Although it is not always guaranteed to find the optimal plans, it produces close to optimal plans in most cases. We discuss how, depending on application requirements, to determine the algorithm of choice. It should be emphasized that our optimization algorithms handle user-defined selections as well as user-defined join predicates uniformly. We present complexity analysis and experimental comparison of the algorithms.
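For the special case of a sequence of independent expensive selections, ordering by the classical rank metric (selectivity - 1)/cost-per-tuple is optimal, which is the intuition behind rank-ordering. The sketch below uses hypothetical selectivities and per-tuple costs; the paper's contribution, interleaving such predicates with join ordering, is much harder than this.

```python
def rank(p):
    """Classical rank metric for expensive selection predicates:
    (selectivity - 1) / cost_per_tuple. Sorting by ascending rank
    minimizes expected cost for independent selections."""
    return (p["selectivity"] - 1.0) / p["cost"]

def expected_cost(order, n_tuples):
    """Expected evaluation cost: each predicate is applied only to the
    tuples surviving the previous ones."""
    total, survivors = 0.0, float(n_tuples)
    for p in order:
        total += survivors * p["cost"]
        survivors *= p["selectivity"]
    return total

# Hypothetical predicates: one expensive and weakly selective, one cheap
# and highly selective.
preds = [{"selectivity": 0.5, "cost": 10.0}, {"selectivity": 0.1, "cost": 1.0}]
ordered = sorted(preds, key=rank)
```

On this example the rank order (cheap, selective predicate first) costs 2000 cost units on 1000 tuples versus 10500 for the reverse order.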

Journal ArticleDOI
TL;DR: This paper introduces flow-based models for designing capacitated networks and routing policies and proposes heuristic schemes based on mathematical programming for solving hub location problems and related routing policies.

Journal ArticleDOI
TL;DR: A restricted DP heuristic (a generalization of the nearest neighbor heuristic) is presented that can include all the above considerations and solves much larger problems, but cannot guarantee optimality.

Journal ArticleDOI
TL;DR: This paper examines the application of a genetic algorithm used in conjunction with a local improvement procedure for solving the location-allocation problem, a traditional multifacility location problem, and demonstrates that the genetic algorithm provides the best solutions.

Journal ArticleDOI
TL;DR: This paper studies the mobile removal problem in a cellular PCS network where transmitter powers are constrained and controlled by a Distributed Constrained Power Control algorithm, and shows that finding the optimal removal set is an NP-complete problem, giving rise to heuristic algorithms.
Abstract: In this paper we study the mobile removal problem in a cellular PCS network where transmitter powers are constrained and controlled by a Distributed Constrained Power Control (DCPC) algorithm. Receivers are subject to non-negligible noise, and the DCPC attempts to bring each receiver's CIR above a given target. To evaluate feasibility and computational complexity, we assume a paradigm where radio bandwidth is scarce and inter-base station connection is fast. We show that finding the optimal removal set is an NP-complete problem, giving rise to heuristic algorithms. We study and compare among three classes of transmitter removal algorithms. Two classes consist of algorithms which are invoked only when reaching a stable power vector under DCPC. The third class consists of algorithms which combine transmitter removals with power control. These are One-by-one Removals, Multiple Removals, and Power Control with Removals Combined. In the class of power control with removals combined, we also consider a distributed algorithm which uses the same local information as DCPC does. All removal algorithms are compared with respect to their outage probabilities and their time to converge to a stable state. Comparisons are made in a hexagonal macro-cellular system, and in two metropolitan micro-cellular systems. The Power Control with Removals Combined algorithm emerges as practically the best approach with respect to both criteria.

Journal ArticleDOI
TL;DR: This article proves that for the case of a single address register the decision problem is NP-complete, even for a single basic block, and generalizes the problem to multiple address registers.
Abstract: DSP architectures typically provide indirect addressing modes with autoincrement and decrement. In addition, indexing mode is generally not available, and there are usually few, if any, general-purpose registers. Hence, it is necessary to use address registers and perform address arithmetic to access automatic variables. Subsuming the address arithmetic into autoincrement and decrement modes improves the size of the generated code. In this article we present a formulation of the problem of optimal storage assignment such that explicit instructions for address arithmetic are minimized. We prove that for the case of a single address register the decision problem is NP-complete, even for a single basic block. We then generalize the problem to multiple address registers. For both cases heuristic algorithms are given, and experimental results are presented.
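The flavor of such storage-assignment heuristics can be sketched with a greedy pass: weight each variable pair by how often the two are accessed consecutively, then greedily make heavy pairs memory neighbors (degree at most 2 and no cycles, so the chosen pairs form paths, i.e., memory layouts). Each covered transition can then use autoincrement/decrement instead of explicit address arithmetic. This is a simplified sketch in the spirit of, not identical to, the algorithms in the article.

```python
from collections import Counter

def soa_greedy(access_seq):
    """Greedy offset-assignment sketch: build an access graph weighted by
    consecutive-access counts, then pick heavy edges Kruskal-style while
    keeping every vertex degree <= 2 and avoiding cycles, so the selected
    edges form paths that can be laid out contiguously in memory."""
    weight = Counter()
    for a, b in zip(access_seq, access_seq[1:]):
        if a != b:
            weight[tuple(sorted((a, b)))] += 1

    parent = {}  # union-find to reject cycles
    def find(x):
        while parent.get(x, x) != x:
            x = parent[x]
        return x

    degree = Counter()
    chosen = []
    for (a, b), _ in weight.most_common():
        if degree[a] < 2 and degree[b] < 2 and find(a) != find(b):
            chosen.append((a, b))
            degree[a] += 1
            degree[b] += 1
            parent[find(a)] = find(b)
    return chosen

edges = soa_greedy("abcabca")  # hypothetical access sequence
```

For the toy sequence the heuristic lays the three variables out as the path a-b-c and rejects the pair (a, c), which would close a cycle.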

Book ChapterDOI
22 Sep 1996
TL;DR: An approach is presented to incorporate problem specific knowledge into a genetic algorithm which is used to compute near-optimum solutions to traveling salesman problems (TSP).
Abstract: In this paper, an approach is presented to incorporate problem specific knowledge into a genetic algorithm which is used to compute near-optimum solutions to traveling salesman problems (TSP). The approach is based on using a tour construction heuristic for generating the initial population, a tour improvement heuristic for finding local optima in a given TSP search space, and new genetic operators for effectively searching the space of local optima in order to find the global optimum. The quality and efficiency of solutions obtained for a set of TSP instances containing between 318 and 1400 cities are presented.
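The two heuristic ingredients named above can be sketched with their simplest representatives: nearest-neighbor tour construction followed by 2-opt local improvement. The paper's construction and improvement heuristics and its genetic operators are more sophisticated; the four-city instance below is a toy.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tour_length(tour, pts):
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor_tour(pts):
    """Tour construction: start at city 0 and always visit the nearest
    unvisited city (one way to seed an initial population)."""
    unvisited = list(range(1, len(pts)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, pts):
    """Tour improvement: reverse segments while doing so strictly shortens
    the tour, stopping at a 2-opt local optimum."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, pts) + 1e-12 < tour_length(tour, pts):
                    tour, improved = cand, True
    return tour

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]  # toy instance: a unit square
tour = two_opt(nearest_neighbor_tour(pts), pts)
```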

Journal ArticleDOI
TL;DR: Comparisons with an interchange heuristic demonstrate that simulated annealing has potential as a solution technique for solving location-planning problems and that further research should be encouraged.
Abstract: Simulated annealing is a computational approach that simulates an annealing schedule used in producing glass and metals. Originally developed by Metropolis et al. in 1953, it has since been applied to a number of integer programming problems, including the p-median location-allocation problem. However, previously reported results by Golden and Skiscim in 1986 were less than encouraging. This article addresses the design of a simulated-annealing approach for the p-median and maximal covering location problems. This design has produced very good solutions in modest amounts of computer time. Comparisons with an interchange heuristic demonstrate that simulated annealing has potential as a solution technique for solving location-planning problems and further research should be encouraged.

Book ChapterDOI
22 Sep 1996
TL;DR: This article focuses on the experimental study of the sensitivity of the Ant-Q algorithm to its parameters and on the investigation of synergistic effects when using more than a single ant.
Abstract: Ant-Q is an algorithm belonging to the class of ant colony based methods, that is, of combinatorial optimization methods in which a set of simple agents, called ants, cooperate to find good solutions to combinatorial optimization problems. The main focus of this article is on the experimental study of the sensitivity of the Ant-Q algorithm to its parameters and on the investigation of synergistic effects when using more than a single ant. We conclude by comparing Ant-Q with its ancestor Ant System, and with other heuristic algorithms.

Journal ArticleDOI
TL;DR: This paper introduces and evaluates two distributed algorithms for finding multicast trees in point-to-point data networks based on the centralized Steiner heuristics, the shortest path heuristic (SPH) and the Kruskal-based shortest path heuristic (K-SPH), and shows that the competitiveness of these algorithms was, on the average, 25% better in comparison to that of the pruned spanning-tree approach.
Abstract: Establishing a multicast tree in a point-to-point network of switch nodes, such as a wide-area asynchronous transfer mode (ATM) network, can be modeled as the NP-complete Steiner problem in networks. In this paper, we introduce and evaluate two distributed algorithms for finding multicast trees in point-to-point data networks. These algorithms are based on the centralized Steiner heuristics, the shortest path heuristic (SPH) and the Kruskal-based shortest path heuristic (K-SPH), and have the advantage that only the multicast members and nodes in the neighborhood of the multicast tree need to participate in the execution of the algorithm. We compare our algorithms by simulation against a baseline algorithm, the pruned minimum spanning-tree heuristic that is the basis of many previously published algorithms for finding multicast trees. Our results show that the competitiveness (the ratio of the sum of the heuristic tree's edge weights to that of the best solution found) of both of our algorithms was, on the average, 25% better in comparison to that of the pruned spanning-tree approach. In addition, the competitiveness of our algorithms was, in almost all cases, within 10% of the best solution found by any of the Steiner heuristics considered, including both centralized and distributed algorithms. Limiting the execution of the algorithm to a subset of the nodes in the network results in an increase in convergence time over the pruned spanning-tree approach, but this overhead can be reduced by careful implementation.
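The centralized SPH on which the distributed variant is based can be sketched directly: grow a tree from one multicast member and repeatedly attach the member closest to the current tree along a shortest path. The small graph below is a made-up instance; the paper's distributed algorithms coordinate this work among the network nodes themselves.

```python
import heapq

def dijkstra(adj, sources):
    """Multi-source Dijkstra: distance and predecessor from the nearest
    source node. `adj` maps node -> list of (neighbor, weight)."""
    dist = {v: float("inf") for v in adj}
    prev = {}
    pq = [(0, s) for s in sources]
    for _, s in pq:
        dist[s] = 0
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def steiner_sph(adj, terminals):
    """Shortest path heuristic (SPH) sketch for Steiner trees: start from
    one terminal and repeatedly attach the terminal nearest to the tree
    along a shortest path (which may pass through non-terminal nodes)."""
    tree = {terminals[0]}
    edges = set()
    remaining = set(terminals[1:])
    while remaining:
        dist, prev = dijkstra(adj, tree)
        t = min(remaining, key=lambda v: dist[v])
        v = t
        while v not in tree:  # walk the shortest path back into the tree
            u = prev[v]
            edges.add(tuple(sorted((u, v))))
            tree.add(v)
            v = u
        remaining.discard(t)
    return edges

# Toy instance: terminals 0, 1, 2 around a Steiner node 3.
adj = {
    0: [(3, 1), (1, 3)],
    1: [(3, 1), (0, 3)],
    2: [(3, 1)],
    3: [(0, 1), (1, 1), (2, 1)],
}
tree_edges = steiner_sph(adj, [0, 1, 2])
```

On this instance SPH correctly routes all three terminals through the cheap hub node 3 instead of using the heavy direct edge.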

Proceedings ArticleDOI
24 Mar 1996
TL;DR: A new graph-theoretic formulation of the RAW problem, dubbed the layered graph, has been proposed which provides an efficient tool for solving dynamic as well as static RAW problems, and provides a framework for obtaining an exact optimal solution for the number of requested lightpaths and for the throughput that a given network can support.
Abstract: We consider the problem of routing and assignment of wavelength (RAW) in optical networks. Given a set of requests for all-optical connections (or lightpaths), the problem is to (a) find routes from the source nodes to their respective destination nodes, and (b) assign wavelengths to these routes. Since the number of wavelengths is limited, lightpaths cannot be established between every pair of access nodes. In this paper we first consider the dynamic RAW problem where lightpath requests arrive randomly with exponentially distributed call holding times. Then, the static RAW problem is considered which assumes that all the lightpaths that are to be set up in the network are known initially. Several heuristic algorithms have already been proposed for establishing a maximum number of lightpaths out of a given set of requests. However most of these algorithms are based on the traditional model of circuit-switched networks where routing and wavelength assignment steps are decoupled. In this paper a new graph-theoretic formulation of the RAW problem, dubbed the layered graph, has been proposed which provides an efficient tool for solving dynamic as well as static RAW problems. The layered-graph model also provides a framework for obtaining exact optimal solution for the number of requested lightpaths as well as for the throughput that a given network can support. A dynamic and two static RAW schemes are proposed which are based on the layered-graph model. Layered-graph-based RAW schemes are shown to perform better than the existing ones.
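The layered-graph construction itself is easy to sketch: replicate the physical topology once per wavelength with no edges between layers (reflecting the absence of wavelength conversion), so a lightpath on wavelength w is an ordinary path inside layer w, and routing and wavelength assignment collapse into a single search problem. A minimal sketch on a made-up three-node path network:

```python
def layered_graph(edges, n_wavelengths):
    """Layered-graph sketch: node (u, w) is physical node u on wavelength
    layer w. Each physical edge is replicated in every layer, and there
    are no cross-layer edges (no wavelength conversion), so any path stays
    on one wavelength."""
    adj = {}
    for w in range(n_wavelengths):
        for u, v in edges:
            adj.setdefault((u, w), set()).add((v, w))
            adj.setdefault((v, w), set()).add((u, w))
    return adj

# Hypothetical physical topology 0 - 1 - 2 with two wavelengths.
lg = layered_graph([(0, 1), (1, 2)], 2)
```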

Proceedings Article
04 Aug 1996
TL;DR: A new abstraction-induced search technique, "Hierarchical A*", is introduced that gets around two difficulties: first, by drawing from a different class of abstractions, "homomorphism abstractions," and, secondly, by using novel caching techniques to avoid repeatedly expanding the same states in successive searches in the abstract space.
Abstract: ion, in search, problem solving, and planning, works by replacing one state space by another (the "abstract" space) that is easier to search. The results of the search in the abstract space are used to guide search in the original space. For instance, the length of the abstract solution can be used as a heuristic for A* in searching in the original space. However, there are two obstacles to making this work efficiently. The first is a theorem (Valtorta, 1984) stating that for a large class of abstractions, "embedding abstractions," every state expanded by blind search must also be expanded by A* when its heuristic is computed in this way. The second obstacle arises because in solving a problem A* needs repeatedly to do a full search of the abstract space while computing its heuristic. This paper introduces a new abstraction-induced search technique, "Hierarchical A*," that gets around both of these difficulties: first, by drawing from a different class of abstractions, "homomorphism abstractions," and, secondly, by using novel caching techniques to avoid repeatedly expanding the same states in successive searches in the abstract space. Hierarchical A* outperforms blind search on all the search spaces studied.

Book ChapterDOI
01 Jan 1996
TL;DR: The paper presents brief overviews of the most successful meta-heuristics and concludes with future directions in this growing area of research.
Abstract: Meta-heuristics are the most recent development in approximate search methods for solving complex optimization problems, that arise in business, commerce, engineering, industry, and many other areas. A meta-heuristic guides a subordinate heuristic using concepts derived from artificial intelligence, biological, mathematical, natural and physical sciences to improve their performance. We shall present brief overviews for the most successful meta-heuristics. The paper concludes with future directions in this growing area of research.

01 Dec 1996
TL;DR: Two new classes of pattern search algorithms for unconstrained minimization are presented: the rank ordered and the positive basis pattern search methods, which can nearly halve the worst case cost of an iteration compared to the classical pattern search algorithms.
Abstract: We present two new classes of pattern search algorithms for unconstrained minimization: the rank ordered and the positive basis pattern search methods. These algorithms can nearly halve the worst case cost of an iteration compared to the classical pattern search algorithms. The rank ordered pattern search methods are based on a heuristic for approximating the direction of steepest descent, while the positive basis pattern search methods are motivated by a generalization of the geometry characteristic of the patterns of the classical methods. We describe the new classes of algorithms and present the attendant global convergence analysis.
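For orientation, the classical method these new classes improve upon, compass (coordinate) pattern search, polls the 2n coordinate directions and halves the step when none improves. The sketch below is that baseline on a toy quadratic, not the rank-ordered or positive-basis variants, which reduce the worst-case polling cost.

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Classical compass pattern search: poll the 2n coordinate directions
    +/- step; move to the first improving point, otherwise halve the step.
    Derivative-free, terminates when the step drops below `tol`."""
    x = list(x0)
    n = len(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for i in range(n):
            for s in (+step, -step):
                y = x[:]
                y[i] += s
                if f(y) < f(x):
                    x, improved = y, True
                    break
            if improved:
                break
        if not improved:
            step /= 2
    return x

# Toy quadratic with minimum at (1, 1) (hypothetical test function).
x = pattern_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] - 1.0) ** 2, [0.0, 0.0])
```

With 2n polling directions per iteration for n variables, one can see why replacing the pattern by a positive basis of n+1 directions nearly halves the worst-case cost.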

Journal ArticleDOI
TL;DR: A fast parallel SAT-solver on a message-based MIMD machine is presented; it uses optimized data structures to modify Boolean formulas and efficient workload balancing algorithms to achieve a uniform distribution of workload among the processors.
Abstract: We present a fast parallel SAT-solver on a message-based MIMD machine. The input formula is dynamically divided into disjoint subformulas. Small subformulas are solved by a fast sequential SAT-solver running on every processor, which is based on the Davis-Putnam procedure with a special heuristic for variable selection. The algorithm uses optimized data structures to modify Boolean formulas. Additionally, efficient workload balancing algorithms are used to achieve a uniform distribution of workload among the processors. We consider the communication network topologies d-dimensional processor grid and linear processor array. Tests with up to 256 processors have shown very good efficiency values (>0.95).
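The sequential core of such a solver, Davis-Putnam-style backtracking with unit propagation and a branching heuristic, can be sketched as follows. The branching rule here (most frequent literal) is only a stand-in for the paper's variable-selection heuristic, and nothing below is parallel.

```python
from collections import Counter

def dpll(clauses, assignment=None):
    """DPLL sketch with unit propagation and a most-frequent-literal
    branching heuristic. Clauses are lists of nonzero ints, -v meaning
    NOT v (DIMACS style). Returns a set of true literals, or None."""
    assignment = set(assignment or [])
    # Unit propagation: drop satisfied clauses, shrink the rest.
    changed = True
    while changed:
        changed = False
        simplified = []
        for c in clauses:
            if any(l in assignment for l in c):
                continue  # clause already satisfied
            c = [l for l in c if -l not in assignment]
            if not c:
                return None  # empty clause: conflict
            if len(c) == 1:
                assignment.add(c[0])  # forced literal
                changed = True
            else:
                simplified.append(c)
        clauses = simplified
    if not clauses:
        return assignment
    # Branching heuristic: try the most frequent remaining literal first.
    lit = Counter(l for c in clauses for l in c).most_common(1)[0][0]
    for choice in (lit, -lit):
        result = dpll(clauses, assignment | {choice})
        if result is not None:
            return result
    return None
```

The parallel solver of the paper would split the search space induced by such branching decisions into disjoint subformulas and balance them across processors.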

Journal ArticleDOI
TL;DR: In this paper, the authors present a simple and effective scheme to efficiently determine the switch exchanges within a loop for minimum line losses, and propose a heuristic scheme to develop the optimal switch plan with minimum switch operations in order to accomplish the transition from the initial configuration to the optimal configuration.
Abstract: Successful applications of the single-loop optimization approach have been reported for resolving the distribution network reconfiguration problem. This approach was originally proposed as an intuitive heuristic method and has generally been understood as such. This paper attempts to provide an analytical description and a systematic understanding of the approach via qualitative analysis. It formulates the problem as a nonlinear integer optimization problem which, if linearized, could be approximately represented by an integer LP (linear programming) problem. This understanding leads to the consideration of applying the concept of the simplex method normally used for solving LP problems, which, in turn, leads to the direct derivation of the single-loop optimization approach. This fact indicates that the single-loop optimization approach actually originates from the same technical principle as the simplex method. This paper also presents a simple and effective scheme to efficiently determine the switch exchanges within a loop for minimum line losses, and proposes a heuristic scheme to develop the optimal switch plan with minimum switch operations in order to accomplish the transition from the initial configuration to the optimal configuration. An example network is studied using the proposed approaches and satisfactory results are obtained.

Journal ArticleDOI
TL;DR: In this paper, a deterministic site-specific engineering-type flow and transport model (SUTRA) is combined with a heuristic optimization technique for groundwater remediation problems at Lawrence Livermore National Laboratory (LLNL).
Abstract: A technique for obtaining a (nearly) optimal scheme using multiple management periods has been developed. The method has been developed for very large scale combinatorial optimization problems. Simulated annealing has been extended to this problem. An importance function is developed to accelerate the search for good solutions. These tools have been applied to groundwater remediation problems at Lawrence Livermore National Laboratory (LLNL). A deterministic site-specific engineering-type flow and transport model (based on the public domain code SUTRA) is combined with the heuristic optimization technique. The objective is to obtain the time-varying optimal locations of the remediation wells that will reduce concentration levels of volatile organic chemicals in groundwater below a given threshold at specified areas on the LLNL site within a certain time frame and subject to a variety of realistic complicating factors. The cost function incorporates construction costs, operation and maintenance costs for injection and extraction wells, costs associated with piping and treatment facilities, and a performance penalty for well configurations that generate flow and transport simulations that exceed maximum concentration levels at specified locations. The resulting application reported here comprises a huge optimization problem. The importance function detailed in this paper has led to rapid convergence to solutions. The performance penalty allows different goals to be imposed on different geographical regions of the site; in this example, short-term off-site plume containment and long-term on-site cleanup are imposed. The performance of the optimization scheme and the effects of various trade-offs in management objectives are explored through examples using the LLNL site.

Journal ArticleDOI
TL;DR: A model is presented that takes into account characteristics of the portfolio optimization problem which are disregarded in most optimization models; it generalizes one of the linear models that recently appeared in the literature as an alternative to the classical Markowitz model.