
Showing papers in "INFORMS Journal on Computing" in 1998


Journal ArticleDOI
TL;DR: A branch-and-cut algorithm for finding an optimal OP solution, based on several families of valid inequalities, is described and shown to solve large-scale instances involving up to 500 nodes to optimality within acceptable computing time.
Abstract: In the Orienteering Problem (OP), we are given an undirected graph with edge weights and node prizes. The problem calls for a simple cycle whose total edge weight does not exceed a given threshold, while visiting a subset of nodes with maximum total prize. This NP-hard problem arises in routing and scheduling applications. We describe a branch-and-cut algorithm for finding an optimal OP solution. The algorithm is based on several families of valid inequalities. We also introduce a family of cuts, called conditional cuts, which can cut off the optimal OP solution, and propose an effective way to use them within the overall branch-and-cut framework. Exact and heuristic separation algorithms are described, as well as heuristic procedures to produce near-optimal OP solutions. An extensive computational analysis on several classes of both real-world and random instances is reported. The algorithm proved to be able to solve to optimality large-scale instances involving up to 500 nodes, within acceptable computing time. This compares favorably with previous published methods.

306 citations
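The OP objective and budget constraint above can be made concrete with a tiny exact solver by enumeration. This is only an illustration on a made-up 4-node instance, not the paper's branch-and-cut (which scales to 500 nodes):

```python
import itertools

def op_brute_force(dist, prize, depot, budget):
    """Exact Orienteering Problem by enumeration (tiny instances only):
    choose a simple cycle through the depot whose total edge weight stays
    within the budget, maximizing the total prize of visited nodes."""
    others = [v for v in range(len(dist)) if v != depot]
    best_prize, best_tour = 0, (depot,)
    for r in range(1, len(others) + 1):
        for subset in itertools.combinations(others, r):
            for perm in itertools.permutations(subset):
                tour = (depot,) + perm
                length = sum(dist[tour[i]][tour[i + 1]]
                             for i in range(len(tour) - 1))
                length += dist[tour[-1]][depot]  # close the cycle
                if length <= budget:
                    p = sum(prize[v] for v in perm)
                    if p > best_prize:
                        best_prize, best_tour = p, tour
    return best_prize, best_tour

# Hypothetical instance: the big prize at node 3 is out of reach of the budget.
dist = [[0, 2, 2, 5], [2, 0, 2, 5], [2, 2, 0, 5], [5, 5, 5, 0]]
prize = [0, 3, 4, 10]
best_prize, best_tour = op_brute_force(dist, prize, depot=0, budget=6)
```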


Journal ArticleDOI
TL;DR: Computational tests of three approaches to feature selection via concave minimization, carried out on publicly available real-world databases, are compared with an adaptation of the optimal brain damage method for reducing neural network complexity.
Abstract: The problem of discriminating between two finite point sets in n-dimensional feature space by a separating plane that utilizes as few of the features as possible is formulated as a mathematical program with a parametric objective function and linear constraints. The step function that appears in the objective function can be approximated by a sigmoid or by a concave exponential on the nonnegative real line, or it can be treated exactly by considering the equivalent linear program with equilibrium constraints. Computational tests of these three approaches on publicly available real-world databases have been carried out and compared with an adaptation of the optimal brain damage method for reducing neural network complexity. One feature selection algorithm via concave minimization reduced cross-validation error on a cancer prognosis database by 35.4% while reducing problem features from 32 to 4.

191 citations
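The concave exponential mentioned above can be illustrated numerically: on the nonnegative line, 1 - exp(-alpha*t) approaches the step function as alpha grows, which is what makes the feature-counting term tractable. A minimal sketch (alpha values here are arbitrary):

```python
import math

def step(t):
    """Exact feature indicator: 1 if the feature is used (t > 0), else 0."""
    return 1.0 if t > 0 else 0.0

def concave_approx(t, alpha):
    """Concave exponential approximation of the step on t >= 0; summing it
    over the |w_i| gives a smooth surrogate for the number of features a
    separating plane actually uses."""
    return 1.0 - math.exp(-alpha * t)
```

For any fixed t > 0 the approximation tends to the step value 1 as alpha grows, while staying exactly 0 at t = 0.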


Journal ArticleDOI
TL;DR: This paper introduces a new binary encoding scheme to represent solutions, together with a heuristic to decode the binary representations into actual sequences, and compares it to the usual "natural" permutation representation for descent, simulated annealing, threshold accepting, tabu search and genetic algorithms on a large set of test problems.
Abstract: This paper presents several local search heuristics for the problem of scheduling a single machine to minimize total weighted tardiness. We introduce a new binary encoding scheme to represent solutions, together with a heuristic to decode the binary representations into actual sequences. This binary encoding scheme is compared to the usual "natural" permutation representation for descent, simulated annealing, threshold accepting, tabu search and genetic algorithms on a large set of test problems. Computational results indicate that all of the heuristics which employ our binary encoding are very robust in that they consistently produce good quality solutions, especially when multistart implementations are used instead of a single long run. The binary encoding is also used in a new genetic algorithm which performs very well and requires comparatively little computation time. A comparison of neighborhood search methods which use the permutation and binary representations shows that the permutation-based methods have a higher likelihood of generating an optimal solution, but are less robust in that some poor solutions are obtained. Of the neighborhood search methods, tabu search clearly dominates the others. Multistart descent performs remarkably well relative to simulated annealing and threshold accepting.

184 citations
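The objective these heuristics minimize is simple to state; a short sketch of evaluating total weighted tardiness for a given job sequence (the instance data below is made up for illustration):

```python
def total_weighted_tardiness(seq, p, w, d):
    """Single-machine objective: jobs run in the order `seq`; job j has
    processing time p[j], weight w[j], and due date d[j]; tardiness is
    completion time past the due date, never negative."""
    t, total = 0, 0
    for j in seq:
        t += p[j]                       # completion time of job j
        total += w[j] * max(0, t - d[j])
    return total

# Hypothetical 3-job instance
p, w, d = [3, 2, 4], [1, 2, 1], [3, 4, 5]
```

A local search heuristic, whatever its representation, just explores permutations while re-evaluating this function.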


Journal ArticleDOI
TL;DR: A novel exact-solution approach for solving the multiple-allocation case of the p-hub median problem is described, and it is shown how a similar method can be adapted for solving the more difficult single-allocation case.
Abstract: The problem of locating hub facilities arises in the design of transportation and telecommunications networks. The p-hub median problem belongs to a class of discrete location-allocation problems in which all the hubs are fully interconnected. Nonhub nodes may be either uniquely or multiply allocated to hubs. The hubs are uncapacitated and the total number of hubs, p is specified a priori. We describe a novel exact-solution approach for solving the multiple-allocation case of the p-hub median problem and show how a similar method can be adapted for solving the more difficult single-allocation case. The methods for both of these solve shortest-path problems to obtain lower bounds, which are used in a branch-and-bound scheme to obtain the exact solution. Numerical results show the superiority of this new approach over traditional LP-based methods.

151 citations
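The multiple-allocation cost structure described above is compact enough to brute-force on toy instances: with hubs fixed, each origin-destination pair routes over its cheapest collect-transfer-distribute path. A sketch (the inter-hub discount alpha and the instance are assumptions for illustration, not from the paper):

```python
import itertools

def p_hub_median(c, p, alpha=0.6, w=None):
    """Brute-force multiple-allocation p-hub median for tiny instances:
    try every hub set of size p; routing one unit i -> j costs
    c[i][k] + alpha * c[k][l] + c[l][j], minimized over hub pairs (k, l),
    where alpha < 1 is the discount on fully interconnected hub links."""
    n = len(c)
    if w is None:
        w = [[1] * n for _ in range(n)]   # unit demand everywhere
    best = (float("inf"), None)
    for hubs in itertools.combinations(range(n), p):
        cost = sum(
            w[i][j] * min(c[i][k] + alpha * c[k][l] + c[l][j]
                          for k in hubs for l in hubs)
            for i in range(n) for j in range(n))
        best = min(best, (cost, hubs))
    return best

# Hypothetical 3-node line network: the middle node is the natural hub.
c = [[0, 1, 4], [1, 0, 1], [4, 1, 0]]
cost, hubs = p_hub_median(c, p=1)
```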


Journal ArticleDOI
TL;DR: This work uses branch and infer, a unifying framework for integer linear programming and finite domain constraint programming, to compare the two approaches with respect to their modeling and solving capabilities, and to introduce symbolic constraint abstractions into integer programming.
Abstract: We introduce branch and infer, a unifying framework for integer linear programming and finite domain constraint programming. We use this framework to compare the two approaches with respect to their modeling and solving capabilities, to introduce symbolic constraint abstractions into integer programming, and to discuss possible combinations of the two approaches.

132 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a cutting plane algorithm for solving the following telecommunications network design problem: given point-to-point traffic demands in a network, specified survivability requirements and a discrete cost/capacity function for each link, find minimum cost capacity expansions satisfying the given demands.
Abstract: We present a cutting plane algorithm for solving the following telecommunications network design problem: given point-to-point traffic demands in a network, specified survivability requirements and a discrete cost/capacity function for each link, find minimum cost capacity expansions satisfying the given demands. This algorithm is based on the polyhedral study described in [19]. In this article we describe the underlying problem, the model and the main ingredients in our algorithm. This includes: initial formulation, feasibility test, separation for strong cutting planes, and primal heuristics. Computational results for a set of real-world problems are reported.

130 citations


Journal ArticleDOI
TL;DR: An efficient numerical method is developed for fitting ARTA processes and its implementation in the software ARTAFACTS is discussed; the software ARTAGEN, which generates observations from ARTA processes for use as inputs to a computer simulation, is also presented.
Abstract: An ARTA (AutoRegressive-to-Anything) process is a time series with arbitrary marginal distribution and autocorrelation structure specified through finite lag p. We develop an efficient numerical method for fitting ARTA processes and discuss its implementation in the software ARTAFACTS. We also present the software ARTAGEN that generates observations from ARTA processes for use as inputs to a computer simulation. We illustrate the use of the software with a real-world example.

112 citations
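The core ARTA transformation can be sketched independently of ARTAFACTS/ARTAGEN: drive a Gaussian AR(1) base process, push each value through the standard normal CDF (yielding a uniform), then through the target inverse CDF. This is a simplified illustration; the paper's fitting procedure chooses the base-process autocorrelations numerically so the output autocorrelations match a target, whereas here the base lag-1 correlation rho is just picked directly:

```python
import math
import random
from statistics import NormalDist

def arta_series(n, rho, inv_cdf, seed=1):
    """ARTA-style generator sketch: every observation has exactly the
    marginal defined by inv_cdf, while autocorrelation is inherited from
    the Gaussian AR(1) base process with lag-1 correlation rho."""
    rng = random.Random(seed)
    nd = NormalDist()
    z = rng.gauss(0, 1)
    out = []
    for _ in range(n):
        z = rho * z + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        out.append(inv_cdf(nd.cdf(z)))   # normal -> uniform -> target marginal
    return out

# Exponential(1) marginal via its inverse CDF
xs = arta_series(1000, rho=0.8, inv_cdf=lambda u: -math.log(1 - u))
```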


Journal ArticleDOI
TL;DR: This article applies a matrix-splitting-like method to discrete-time optimal control problems formulated as extended linear-quadratic programs in the manner advocated by Rockafellar and Wets, yielding a highly parallel algorithm.
Abstract: This article applies splitting techniques developed for set-valued maximal monotone operators to monotone affine variational inequalities, including, as a special case, the classical linear complementarity problem. We give a unified presentation of several splitting algorithms for monotone operators, and then apply these results to obtain two classes of algorithms for affine variational inequalities. The second class resembles classical matrix splitting, but has a novel "under-relaxation" step, and converges under more general conditions. In particular, the convergence proofs do not require the affine operator to be symmetric. We specialize our matrix-splitting-like method to discrete-time optimal control problems formulated as extended linear-quadratic programs in the manner advocated by Rockafellar and Wets. The result is a highly parallel algorithm, which we implement and test on the Connection Machine CM-5 computer family.

88 citations
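For the special case named above, the linear complementarity problem, a classical matrix-splitting iteration fits in a few lines. The sketch below is plain projected Gauss-Seidel, not the paper's under-relaxed variant, and assumes a symmetric positive definite M so the iteration converges:

```python
def projected_gauss_seidel(M, q, iters=200):
    """Matrix-splitting sketch for LCP(q, M): find z >= 0 such that
    w = M z + q >= 0 and z^T w = 0. Each sweep updates one coordinate of z
    and projects it back onto the nonnegative orthant."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            r = q[i] + sum(M[i][j] * z[j] for j in range(n) if j != i)
            z[i] = max(0.0, -r / M[i][i])   # one-dimensional complementarity
    return z
```

On a 2x2 instance with M = [[2,1],[1,2]] and q = [-1,1], the iteration settles on z = (1/2, 0), which satisfies both nonnegativity and complementarity.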


Journal ArticleDOI
TL;DR: In this article, variable redefinition is used to strengthen a multicommodity flow (MCF) model for minimum spanning and Steiner trees with hop constraints between a root node and any other node.
Abstract: We use variable redefinition (see R. MARTIN, 1987. Generating Alternative Mixed-Integer Programming Models Using Variable Redefinition, Operations Research 35, 820-831) to strengthen a multicommodity flow (MCF) model for minimum spanning and Steiner trees with hop constraints between a root node and any other node. Hop constraints model quality of service constraints. The Lagrangean dual value associated with one Lagrangean relaxation derived from the MCF formulation dominates the corresponding LP value. However, the lower bounds given after a reasonable number of iterations of the associated subgradient optimization procedure are, for several cases, still far from the theoretical best limit. Martin's variable redefinition technique is used to obtain a generalization of the MCF formulation whose LP bound is equal to the previously mentioned Lagrangean dual bound. We use a set of instances with up to 100 nodes, 50 basic nodes, and 350 edges for comparing an LP approach based on solving the LP relaxation of the new model with the equivalent Lagrangean scheme derived from MCF.

82 citations


Journal ArticleDOI
TL;DR: In this paper, a distributed algorithm for the generation of state space graphs is proposed, which takes advantage of the combined memory readily available on a network of workstations and can be linked to a number of existing system modeling tools.
Abstract: High-level formalisms such as stochastic Petri nets can be used to model complex systems. Analysis of logical and numerical properties of these models often requires the generation and storage of the entire underlying state space. This imposes practical limitations on the types of systems that can be modeled. Because of the vast amount of memory consumed, we investigate distributed algorithms for the generation of state space graphs. The distributed construction allows us to take advantage of the combined memory readily available on a network of workstations. The key technical problem is to find effective methods for on-the-fly partitioning, so that the state space is evenly distributed among processors. In this article we report on the implementation of a distributed state space generator that may be linked to a number of existing system modeling tools. We discuss partitioning strategies in the context of Petri net models, and report on performance observed on a network of workstations, as well as on a distributed memory multicomputer.

75 citations


Journal ArticleDOI
TL;DR: A branch and bound algorithm for finding a maximum stable set in a graph is presented; it uses cliques, odd cycles, and a maximum matching on the remaining nodes to construct strong upper bounds.
Abstract: We present a branch and bound algorithm for finding a maximum stable set in a graph. The algorithm uses properties of the stable set polytope to construct strong upper bounds. Specifically, it uses cliques, odd cycles, and a maximum matching on the remaining nodes. The cliques are generated via standard coloring heuristics, and the odd cycles are generated from blossoms found by a matching algorithm. We report computational experience on two classes of randomly generated graphs and on the DIMACS Challenge Benchmark graphs. These experiments indicate that the algorithm is quite effective, particularly for sparse graphs.
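The bounding idea is easy to demonstrate in miniature: a stable set intersects each clique in at most one node, so a greedy clique cover of the remaining candidates bounds the attainable size. A toy branch and bound sketch (the paper's bounds additionally use odd cycles and a matching; the graphs below are made up):

```python
def max_stable_set(adj):
    """Toy branch and bound for maximum stable set. adj[v] is the set of
    neighbors of v. Upper bound at each node of the search tree: a greedy
    clique cover of the candidate set, since any stable set picks at most
    one vertex from each clique."""
    best = [0]

    def clique_cover_bound(cand):
        remaining, cliques = set(cand), 0
        while remaining:
            v = remaining.pop()
            clique = {v}
            for u in list(remaining):
                if all(u in adj[x] for x in clique):
                    clique.add(u)
                    remaining.discard(u)
            cliques += 1
        return cliques

    def branch(cand, size):
        if size + clique_cover_bound(cand) <= best[0]:
            return                              # cannot beat incumbent
        if not cand:
            best[0] = size
            return
        v = next(iter(cand))
        branch(cand - {v} - adj[v], size + 1)   # take v, drop its neighbors
        branch(cand - {v}, size)                # skip v
    branch(set(range(len(adj))), 0)
    return best[0]

# Example: the 5-cycle, whose maximum stable set has size 2
c5 = [{1, 4}, {0, 2}, {1, 3}, {2, 4}, {3, 0}]
```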

Journal ArticleDOI
TL;DR: A hierarchical algorithm for approximating shortest paths between all pairs of nodes in a large-scale network is proposed, and the tradeoffs between computational savings and associated errors are explored both analytically and empirically with a case study of the Southeast Michigan traffic network.
Abstract: We propose a hierarchical algorithm for approximating shortest paths between all pairs of nodes in a large-scale network. The algorithm begins by extracting a high-level subnetwork of relatively long links (and their associated nodes) where routing decisions are most crucial. This high-level network partitions the shorter links and their nodes into a set of lower-level subnetworks. By fixing gateways within the high-level network for entering and exiting these subnetworks, a computational savings is achieved at the expense of optimality. We explore the magnitude of these tradeoffs between computational savings and associated errors both analytically and empirically with a case study of the Southeast Michigan traffic network. An order-of-magnitude drop in computation times was achieved with an on-line route guidance simulation, at the expense of less than 6% increase in expected trip times.
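The optimality-for-speed tradeoff can be illustrated with a toy version of the gateway idea: forcing a route through a fixed gateway lets distances to and from gateways be precomputed and reused, but can only lengthen the path. A minimal sketch on a hypothetical graph (not the paper's subnetwork-extraction scheme):

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src; graph maps node -> {neighbor: weight}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def via_gateway(graph, src, dst, gateway):
    """Hierarchical approximation: route through a fixed gateway node,
    trading optimality for reuse of precomputed gateway distances."""
    return dijkstra(graph, src)[gateway] + dijkstra(graph, gateway)[dst]

# Hypothetical network: the direct route 0-1-3 beats the gateway route via 2.
graph = {0: {1: 1, 2: 5}, 1: {0: 1, 3: 1}, 2: {0: 5, 3: 1}, 3: {1: 1, 2: 1}}
```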

Journal ArticleDOI
TL;DR: A design problem arising in the color printing industry is described and a number of integer linear programming and constraint programming approaches to its solution are discussed, showing that the constraint programming approach provides better results, although in some cases after considerable handcrafting.
Abstract: We describe a design problem arising in the color printing industry and discuss a number of integer linear programming and constraint programming approaches to its solution. Despite the apparent simplicity of the problem it presents a challenge for both approaches. We present results for three typical cases and show that the constraint programming approach provides better results, although in some cases after considerable handcrafting. We also show that the results obtained by constraint programming can be improved by a simple goal programming model.

Journal ArticleDOI
TL;DR: The comparative performance of Integer Programming and Constraint Logic Programming is explored by examining a number of models for four different combinatorial optimization applications, and an analysis of performance with respect to problem and model characteristics is presented.
Abstract: The comparative performance of Integer Programming (IP) and Constraint Logic Programming (CLP) is explored by examining a number of models for four different combinatorial optimization applications. Computational results show contrasting behavior for the two approaches, and an analysis of performance with respect to problem and model characteristics is presented. The analysis shows that tightness of formulation is of great benefit to CLP where effective search reduction results in problems that can be solved quickly. In IP, if the linear feasible region does not identify the corresponding integer polytope, the problem may be difficult to solve. The paper identifies other characteristics of model behavior and concludes by examining ways in which IP and CLP may be incorporated within hybrid solvers.

Journal ArticleDOI
TL;DR: Methods for combining multiple client-side and server-side technologies are critical to OR/MS's use of the Web; together with various emerging technologies for developing computational applications, they give the OR/MS worker a rich armament for building Web-based versions of conventional applications.
Abstract: The World Wide Web has already affected OR/MS work in a significant way, and holds great potential for changing the nature of OR/MS products and the OR/MS software economy. Web technologies are relevant to OR/MS work in two ways. First, the Web is a multimedia communication system. Originally based on an information pull model, it is, critically for OR/MS, being extended for information push as well. Second, it is a large distributed computing environment in which OR/MS products (interactive computational applications) can be made available, and interacted with, over a global network. Enabling technologies for Web-based execution of OR/MS applications are classified into those involving client-side execution and server-side execution. Methods for combining multiple client-side and server-side technologies are critical to OR/MS's use of these technologies. These methods, and various emerging technologies for developing computational applications, give the OR/MS worker a rich armament for building Web-based versions of conventional applications. They also enable a new class of distributed applications working on real-time data. Web technologies are expected to encourage the development of OR/MS products as specialized component applications that can be bundled to solve real-world problems. Effective exploitation, for OR/MS purposes, of these technological innovations will also require initiatives, changes, and greater involvement by OR/MS organizations.

Journal ArticleDOI
TL;DR: Two related methods for deriving probability distribution estimates using approximate rational Laplace transform representations are proposed, addressing the question of the number of terms, or the order, involved in a generalized hyperexponential, phase-type, or Coxian distribution, a problem not adequately treated by existing methods.
Abstract: We propose two related methods for deriving probability distribution estimates using approximate rational Laplace transform representations. Whatever method is used, the result is a Coxian estimate for an arbitrary distribution form or plain sample data, with the algebra of the Coxian often simplifying to a generalized hyperexponential or phase-type. The transform (or, alternatively, the moment-generating function) is used to facilitate the computations and leads to an attractive algorithm. For method one, the first 2N - 1 derivatives of the transform are matched with those of an approximate rational function; for the second method, a like number of values of the transform are matched with those of the approximation. The numerical process in both cases begins with an empirical Laplace transform or truncation of the actual transform, and then requires only the solution of a relatively small system of linear equations, followed by root finding for a low-degree polynomial. Besides the computationally attractive features of the overall procedure, it addresses the question of the number of terms, or the order, involved in a generalized hyperexponential, phase-type, or Coxian distribution, a problem not adequately treated by existing methods. Coxian distributions are commonly used in the modeling of single-stage and network queueing problems, inventory theory, and reliability analyses. They are particularly handy in the development of large-scale model approximations.

Journal ArticleDOI
TL;DR: A variety of variance-reduction techniques that can be used to improve the quality of the objective-function approximations derived from sampled data are presented within the context of SLP objective-function evaluations.
Abstract: Planning under uncertainty requires the explicit representation of uncertain quantities within an underlying decision model. When the underlying model is a linear program, the representation of certain data elements as random variables results in a stochastic linear program (SLP). Precise evaluation of an SLP objective function requires the solution of a large number of linear programs, one for each possible combination of the random variables' outcomes. To reduce the effort required to evaluate the objective function, approximations, especially those derived from randomly sampled data, are often used. In this article, we explore a variety of variance-reduction techniques that can be used to improve the quality of the objective-function approximations derived from sampled data. These techniques are presented within the context of SLP objective-function evaluations. Computational results offering an empirical comparison of the level of variance reduction afforded by the various methods are included.
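One classical technique of the kind surveyed above, antithetic variates, is easy to sketch for a generic Monte Carlo mean estimate; in the SLP setting, f would be the recourse-LP value as a function of the sampled outcome. A minimal illustration:

```python
import random

def mc_estimates(f, n, seed=0):
    """Crude vs antithetic-variates Monte Carlo estimates of E[f(U)] for
    U ~ Uniform(0,1). Pairing each draw u with 1 - u induces negative
    correlation that cancels variance when f is monotone. (Note the
    antithetic estimate uses 2n function evaluations.)"""
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n)]
    crude = sum(f(u) for u in us) / n
    antithetic = sum((f(u) + f(1 - u)) / 2 for u in us) / n
    return crude, antithetic
```

For f(u) = u^2 (true mean 1/3), the antithetic pairs (u^2 + (1-u)^2)/2 vary far less than the raw draws, so the second estimate is much tighter for the same seed.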

Journal ArticleDOI
TL;DR: An algorithm for identifying the extreme rays of the conical hull of a finite set of vectors whose generated cone is pointed is presented and it is demonstrated that for a wide range of problems it is computationally superior to the standard approach.
Abstract: We present an algorithm for identifying the extreme rays of the conical hull of a finite set of vectors whose generated cone is pointed. This problem appears in diverse areas including stochastic programming, computational geometry, and non-parametric efficiency measurement. The standard approach consists of solving a linear program for every element of the set of vectors. The new algorithm differs in that it solves fewer and substantially smaller LPs. Extensive computational testing validates the algorithm and demonstrates that for a wide range of problems it is computationally superior to the standard approach.

Journal ArticleDOI
TL;DR: A new method, based on the nested dissection heuristic, provides significantly better orderings than the most commonly used ordering method, minimum degree, on a variety of large-scale linear programming problems.
Abstract: The main cost of solving a linear programming problem using an interior point method is usually the cost of solving a series of sparse, symmetric linear systems of equations, AΘATx = b. These systems are typically solved using a sparse direct method. The first step in such a method is a reordering of the rows and columns of the matrix to reduce fill in the factor and/or reduce the required work. This article evaluates several methods for performing fill-reducing ordering on a variety of large-scale linear programming problems. We find that a new method, based on the nested dissection heuristic, provides significantly better orderings than the most commonly used ordering method, minimum degree.
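The quantity these orderings try to reduce, fill-in, can be counted symbolically: eliminating a row/column connects all of its not-yet-eliminated neighbors pairwise. A small sketch showing why ordering matters on a star-shaped sparsity pattern (this illustrates the metric, not the paper's nested dissection code):

```python
def fill_in(adj, order):
    """Count fill edges produced by symbolic Gaussian elimination on a
    symmetric sparsity pattern given as {vertex: set of neighbors}."""
    g = {v: set(ns) for v, ns in adj.items()}
    fill, eliminated = 0, set()
    for v in order:
        nbrs = [u for u in g[v] if u not in eliminated]
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in g[a]:           # new edge = one fill entry
                    g[a].add(b)
                    g[b].add(a)
                    fill += 1
        eliminated.add(v)
    return fill

# Star graph: eliminating the hub first fills in every leaf pair;
# eliminating the leaves first (minimum degree order) creates no fill.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
```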

Journal ArticleDOI
TL;DR: The concept of interval disclosure is introduced, whereby a datum is compromised if the answered queries provide enough information to establish that it is contained in a given interval, even if the datum cannot be determined exactly.
Abstract: We deal with the question of how to maintain security of confidential information in a database while answering as many queries as possible. The database is assumed to operate in a query restriction (as opposed to perturbation) mode in which exact answers are given to those queries which, together with those already answered, will not compromise any confidential datum. Those which fail this criterion are not answered. We introduce the concept of interval disclosure where a datum is compromised if the answered queries provide enough information to establish that it is contained in a given interval even if the datum cannot be determined exactly. Models are presented for the problem of deciding whether to answer a query and three techniques, one based on linear programming, are developed and tested.

Journal ArticleDOI
TL;DR: This article extends traditional crash procedures whereby triangularity of the basis is always maintained and takes advantage of the problem characteristics; it applies additional criteria for determining the entering structural variables and their position in the basis.
Abstract: The performance of the revised sparse simplex method, or its variants, for the solution of large scale linear programming problems can be considerably enhanced if an advanced starting basis is introduced instead of the traditional all-logical initial basis. We consider three such procedures known as crash procedures to obtain advanced starting points for the revised sparse simplex: 1) We extend traditional crash procedures whereby triangularity of the basis is always maintained and take advantage of the problem characteristics; we apply additional criteria for determining the entering structural variables and their position in the basis. 2) We allow the triangularity to be slightly "violated," and make some cheap pivot steps to find a better basis. 3) We use the iterative successive-over-relaxation technique to compute a non-extreme point solution and make a crossover to a basic solution. In this article, we first review the work done by other researchers. Then we present an analysis of issues, followed by a description of the techniques and our computational experience.

Journal ArticleDOI
TL;DR: This article briefly reviews aspects of the separate development of integer linear programming and constraint logic programming that give rise to connections between the two fields, and describes some of these connections which appear to have potential for harnessing and for future research.
Abstract: The fields of integer linear programming and constraint logic programming have developed from different standpoints in mathematics and operations research, but offer synergy. This article will briefly review some aspects of their separate development that give rise to connections between the two fields and will describe some of these connections which appear to have potential for harnessing and for future research. The article will then introduce the other four articles in this cluster, which further explore and develop these interconnections.

Journal ArticleDOI
TL;DR: This article uses competitive analysis to evaluate on-line scheduling strategies for controlling a new generation of networked reprographic machines currently being developed by companies such as Xerox Corporation, and proves some lower bounds on the performance of any online algorithm with finite lookahead.
Abstract: This article describes our use of competitive analysis and the on-line model of computation in a product development setting; specifically, we use competitive analysis to evaluate on-line scheduling strategies for controlling a new generation of networked reprographic machines (combination printer-copier-fax machines servicing a network) currently being developed by companies such as Xerox Corporation. We construct an abstract machine model, the multipass assembly line, which not only models networked reprographic machines but also models several common manufacturing environments such as a robotic assembly line or a mixed product assembly line. We consider on-line algorithms with finite lookahead because these machines typically have limited knowledge of the future. We first prove some lower bounds on the performance of any online algorithm with finite lookahead. We then show that simple greedy algorithms achieve competitive ratios that are close to these general lower bounds. In particular, we show that lookahead improves the competitive ratio of these simple greedy algorithms from approximately 2 (with no lookahead) to being arbitrarily close to 1 (for large lookahead). This implies these simple greedy algorithms are realistic candidates for field use in future reprographic products.

Journal ArticleDOI
TL;DR: This paper deals with an incremental branch-and-bound method which solves both the satisfiability problem and its incremental variant exactly, and leads to an efficient implementation which compares favorably with the classical Davis-Putnam-Loveland procedure and its incremental version designed by Hooker.
Abstract: The satisfiability problem is to check whether a set of clauses in propositional logic is satisfiable. If it is satisfiable, the incremental satisfiability problem is then to check whether satisfiability remains given additional clauses. This paper deals with an incremental branch-and-bound method which solves exactly both problems. This method includes flexible Lagrangean relaxations, metaheuristics, and judicious jumping back. This leads to an efficient implementation which compares favorably with the classical Davis-Putnam-Loveland procedure and its incremental version designed by Hooker. Numerous computational results are detailed.
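The baseline the authors compare against, Davis-Putnam-Loveland, fits in a few lines; naive incremental checking then amounts to re-running with the extra clauses (a real incremental solver reuses search information instead). A minimal sketch:

```python
def dpll(clauses, assignment=None):
    """Minimal Davis-Putnam-Loveland satisfiability check (without the
    paper's Lagrangean bounds or jumping back). Clauses are sets of
    nonzero ints; a negative int is a negated variable."""
    if assignment is None:
        assignment = {}
    # drop satisfied clauses, then strip falsified literals
    clauses = [c for c in clauses
               if not any(assignment.get(abs(l)) == (l > 0) for l in c)]
    clauses = [{l for l in c if abs(l) not in assignment} for c in clauses]
    if not clauses:
        return True                     # every clause satisfied
    if any(not c for c in clauses):
        return False                    # some clause falsified
    lit = next(iter(clauses[0]))        # branch on a literal, both ways
    return (dpll(clauses, {**assignment, abs(lit): lit > 0})
            or dpll(clauses, {**assignment, abs(lit): not (lit > 0)}))

# Incremental use: answer the base query, then re-check with added clauses.
base = [{1, 2}, {-1, 3}]
```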

Journal ArticleDOI
TL;DR: It is shown that optimal solutions for these TSPs can be known a priori, and thus, they provide us with new nontrivial TSP instances offering the possibility of testing heuristics well beyond the scope of testbed instances which have been solved by exact numerical methods.
Abstract: We show how, by a constructive process, we can generate arbitrarily large instances of the Traveling Salesman Problem (TSP) using standard fractals such as those of Peano, Koch, or Sierpinski. We show that optimal solutions for these TSPs can be known a priori, and thus, they provide us with new nontrivial TSP instances offering the possibility of testing heuristics well beyond the scope of testbed instances which have been solved by exact numerical methods. Furthermore, instances may be constructed with different features, for example, with different fractal dimensions. To four of these fractal TSPs we apply three standard constructive heuristics, namely Multiple Fragment, Nearest Neighbor, and Farthest Insertion from Convex-Hull, which have efficient general-purpose implementations. The ability of different algorithms to solve these different fractal TSPs gives us significant insight into the nature of TSP heuristics in a way which is complementary to approaches such as worst-case or average-case analysis.
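Generating such an instance is straightforward; the sketch below builds Koch-curve points in construction order (which traces the curve, hence yields a natural short tour) and runs the Nearest Neighbor heuristic on them. This is an illustration only, not the paper's construction or its optimality argument:

```python
import math

def koch_points(depth):
    """Points of the Koch curve in construction order, obtained by
    recursively replacing each segment with four shorter ones."""
    def rec(p, q, d):
        if d == 0:
            return [p]
        (x0, y0), (x1, y1) = p, q
        dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
        a = (x0 + dx, y0 + dy)
        b = (x1 - dx, y1 - dy)
        mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
        # apex of the equilateral bump erected on segment a-b
        c = (mx - (b[1] - a[1]) * math.sqrt(3) / 2,
             my + (b[0] - a[0]) * math.sqrt(3) / 2)
        return (rec(p, a, d - 1) + rec(a, c, d - 1)
                + rec(c, b, d - 1) + rec(b, q, d - 1))
    return rec((0.0, 0.0), (1.0, 0.0), depth) + [(1.0, 0.0)]

def tour_length(order, pts):
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor(pts):
    """Standard Nearest Neighbor constructive heuristic, starting at point 0."""
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        last = pts[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, pts[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour
```

Comparing `tour_length(nearest_neighbor(pts), pts)` against the construction-order tour `list(range(len(pts)))` shows how far a heuristic strays from the fractal's natural ordering.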

Journal ArticleDOI
TL;DR: A large integer programming model developed at the MITRE Corporation for minimizing air traffic delay is presented, demonstrating that the model can be solved to provable optimality in real time for problem instances involving over 1 million binary variables.
Abstract: Growth in traffic and changes in traffic patterns have caused an increase in the congestion and delay in the National Airspace System. Air traffic delay is very costly to the airlines, and minimizing this delay has been a subject of research for over a decade. A large integer programming model developed at the MITRE Corporation for minimizing air traffic delay is presented. Solving problem instances arising from this model involves the use of preprocessing, constraint strengthening, and a carefully designed computer implementation. Results are presented, demonstrating that the model can be solved to provable optimality in real time for problem instances involving over 1 million binary variables.

Journal ArticleDOI
TL;DR: It is shown that despite the strong interactions between various statements involving the operators, the Ascend modeling language does possess certain desirable mathematical and computational properties.
Abstract: The use of strong typing, exemplified in the Ascend modeling language, is a recent phenomenon in executable modeling languages for mathematical modeling. It is also one that has significant potential for improving the functionality of computer-based modeling environments. Besides being a strongly typed language, Ascend is unique in providing operators that allow dynamic type inference, a feature that has been shown to be useful in assisting model evolution and reuse. We develop formal semantics for the type system in Ascend-focusing on these operators-and analyze its mathematical and computational properties. We show that despite the strong interactions between various statements involving the operators, the language does possess certain desirable mathematical and computational properties. Further, our analysis identifies general issues in the design and implementation of type systems in mathematical modeling languages. The methods used in the article are applicable beyond Ascend to a class of typed modeling languages that may be developed in the future.

Journal ArticleDOI
TL;DR: The metrics introduced in this article have a rigorous theoretical, as well as empirical, grounding in software engineering and represent a new area of application in simulation modeling and analysis.
Abstract: In this article, we introduce complexity measures for simulation models. The framework of simulation graphs sets the context. A quantifiable measure of complexity is useful in an a priori evaluation of proposed simulation studies that must be completed within a specified budget. Such measures can also be useful in classifying simulation models to obtain a thorough test bed of models to be used in simulation methodology research. The metrics introduced in this article have a rigorous theoretical, as well as empirical, grounding in software engineering. As such, simulation modeling and analysis represent a new area of application. Some surrogate measures of run time complexity are also developed. In particular, we provide estimates for the size of the future events list (or the pending event set). The proposed metrics are illustrated and compared through a limited set of examples. Limitations of the current approach as well as directions for future research are discussed.
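The pending event set mentioned above is a concrete, measurable quantity in any discrete-event simulator. As a minimal sketch (our own, not the article's estimator), the loop below runs a single-server queue on a heap-based event list and records the list's size at every step; the arrival and service rates are invented for illustration.

```python
import heapq
import random

# Minimal discrete-event loop that tracks the size of the future events
# list (pending event set). A sketch of the run-time quantity discussed
# above, not the article's actual surrogate measure.
random.seed(1)
heap, sizes = [], []
heapq.heappush(heap, (random.expovariate(1.0), "arrival"))
busy_until = 0.0
for _ in range(1000):
    sizes.append(len(heap))            # sample the pending event set size
    clock, kind = heapq.heappop(heap)
    if kind == "arrival":
        # schedule the next arrival and this job's service completion
        heapq.heappush(heap, (clock + random.expovariate(1.0), "arrival"))
        busy_until = max(busy_until, clock) + random.expovariate(1.25)
        heapq.heappush(heap, (busy_until, "departure"))

avg_size = sum(sizes) / len(sizes)
print(f"average pending-event-set size: {avg_size:.2f}")
```

Statistics such as this average (or the maximum over the run) are the kind of run-time quantity an a priori complexity estimate would try to predict from the model's structure alone.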

Journal ArticleDOI
TL;DR: An adaptable routing scheme for telecommunication networks based on the notion of virtual clustering that significantly outperforms traditional schemes under dynamic traffic fluctuation is proposed.
Abstract: We propose an adaptable routing scheme for telecommunication networks based on the notion of virtual clustering. Virtual clustering logically reconfigures the network topology on a periodic basis according to the traffic requirements in the network. This temporary topology configuration imposes restrictions on the potential paths between certain groups of source-destination node pairs, thus regulating and balancing the global network traffic while maintaining sufficient local flexibility for real-time routing. The proposed routing scheme can be implemented in an established telecommunication environment where the actual routing of network traffic is conducted by an existing dynamic routing algorithm, while the routing tables are preprocessed by the current virtual clustering configuration. We develop a virtual clustering algorithm using the problem space search proposed by Storer, Wu, and Vaccari (1992. New Search Spaces for Sequencing Problems with Applications to Job Shop Scheduling. Management Science 38, 1495-1509; 1996. Local Search in Problem and Heuristic Space for Job Shop Scheduling. ORSA Journal on Computing 7, 453-465). The algorithm iteratively evaluates clustering solutions and their potential impact on routing performance by perturbing the traffic requirement matrix. The proposed routing scheme is tested through intensive computational experiments on randomly generated geometric networks, and on PREPNET, a real-world subnetwork on the Internet. Using Monte Carlo simulation, we compare the proposed routing scheme to centralized routing and decentralized asynchronous routing. Computational results show that the proposed approach significantly outperforms these traditional schemes under dynamic traffic fluctuation.
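The core idea of problem space search is to perturb the problem *data* rather than the solution, re-run a fast heuristic on the perturbed data, and evaluate the result against the original data. The sketch below applies that pattern to a toy two-cluster partition of a traffic matrix; the greedy heuristic and cut-weight objective are our own illustrative stand-ins, not the article's clustering algorithm.

```python
import random

# Problem space search sketch (after Storer, Wu, and Vaccari): perturb the
# traffic matrix, cluster the perturbed data with a cheap greedy heuristic,
# but always score the resulting partition on the ORIGINAL matrix.
random.seed(0)
n = 6
traffic = [[0 if i == j else random.randint(1, 9) for j in range(n)]
           for i in range(n)]

def greedy_two_clusters(matrix):
    # seed two clusters with the heaviest-traffic nodes, then attach each
    # remaining node to the cluster it sends the most traffic to
    order = sorted(range(n), key=lambda i: -sum(matrix[i]))
    clusters = [{order[0]}, {order[1]}]
    for i in order[2:]:
        load = [sum(matrix[i][j] for j in c) for c in clusters]
        clusters[load.index(max(load))].add(i)
    return clusters

def cut_weight(clusters, matrix):
    # inter-cluster traffic: what routing must carry across the cluster cut
    a, b = clusters
    return sum(matrix[i][j] for i in a for j in b)

best = greedy_two_clusters(traffic)
best_cost = cut_weight(best, traffic)
for _ in range(200):
    perturbed = [[v + random.uniform(-2, 2) for v in row] for row in traffic]
    cand = greedy_two_clusters(perturbed)
    cost = cut_weight(cand, traffic)        # evaluate on original data
    if cost < best_cost:
        best, best_cost = cand, cost

print(best_cost)
```

Because every perturbation yields a feasible partition via the heuristic, the search space is the space of data perturbations, which is what distinguishes this approach from neighborhood search over solutions.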

Journal ArticleDOI
TL;DR: The feature article by Bhargava and Krishnan does a wonderful job of outlining the current state of the Web and its future possibilities, but this commentary concentrates on a group that the article does not cover in depth: the users of OR/MS.
Abstract: The feature article by Bhargava and Krishnan does a wonderful job of outlining the current state of the Web and its future possibilities. The Web is an active and ever-changing place; providing a snapshot of the state of the art is not easy. In this commentary, I would like to concentrate on a group that the article does not cover in depth: the users of OR/MS. The number of users of OR/MS is vastly larger than the number of producers, and users have been affected perhaps even more by the Web. I will concentrate on two types of users: those who do not know what OR/MS is but have found our community through the Web, and our students.