
Showing papers in "Annals of Operations Research in 1999"


Journal ArticleDOI
TL;DR: An improved ant system algorithm for the Vehicle Routing Problem with one central depot and identical vehicles is presented, and a comparison with five other metaheuristic approaches for solving Vehicle Routing Problems is given.
Abstract: The Ant System is a distributed metaheuristic that combines an adaptive memory with a local heuristic function to repeatedly construct solutions of hard combinatorial optimization problems. In this paper, we present an improved ant system algorithm for the Vehicle Routing Problem with one central depot and identical vehicles. Computational results on fourteen benchmark problems from the literature are reported and a comparison with five other metaheuristic approaches for solving Vehicle Routing Problems is given.
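The paper's own parameter settings are not given here, but the construction mechanism the abstract refers to is the standard ant-system transition rule: the next customer is chosen with probability proportional to pheromone raised to a power alpha times heuristic "visibility" raised to a power beta. A minimal sketch, with illustrative names and default parameters (not the authors' code):

```python
import random

def choose_next_customer(current, unvisited, pheromone, dist, alpha=1.0, beta=2.0):
    """Standard ant-system transition rule: pick the next customer with
    probability proportional to pheromone^alpha * (1/distance)^beta.
    alpha, beta and the data structures are illustrative, not the paper's."""
    weights = [
        (pheromone[current][j] ** alpha) * ((1.0 / dist[current][j]) ** beta)
        for j in unvisited
    ]
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]
```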

652 citations


Journal ArticleDOI
TL;DR: An efficient approach for solving capacitated single allocation hub location problems using a modified version of a previous mixed integer linear programming formulation developed by us for p-hub median problems, with fewer variables and constraints than those traditionally used in the literature.
Abstract: In this paper, we present an efficient approach for solving capacitated single allocation hub location problems. We use a modified version of a previous mixed integer linear programming formulation developed by us for p-hub median problems. This formulation requires fewer variables and constraints than those traditionally used in the literature. We develop good heuristic algorithms for its solution based on simulated annealing (SA) and random descent (RDH). We use the upper bound to develop an LP-based branch and bound solution method. The problem, as we define it, finds applications in the design of postal delivery networks, particularly in the location of capacitated mail sorting and distribution centres. We test our algorithms on data obtained from this application. To the best of our knowledge, this problem has not been solved in the literature. Computational results are presented indicating the usefulness of our approach.
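The abstract does not spell out the SA details; the acceptance step that simulated annealing heuristics of this kind rely on is the usual Metropolis rule. A minimal sketch with illustrative names, not the authors' implementation:

```python
import math, random

def accept(delta_cost, temperature):
    """Metropolis rule used by simulated annealing: always accept improving
    moves, accept worsening moves with probability exp(-delta/T)."""
    if delta_cost <= 0:
        return True
    return random.random() < math.exp(-delta_cost / temperature)
```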

331 citations


Journal ArticleDOI
TL;DR: The linear relaxation of this model provides a strong lower bound for the bin‐packing problem and leads to tractable branch‐and‐bound trees for the instances under consideration.
Abstract: We explore an arc flow formulation with side constraints for the one-dimensional bin-packing problem. The model has a set of flow conservation constraints and a set of constraints that force the appropriate number of items to be included in the packing. The model is tightened by fixing some variables at zero level, to reduce the symmetry of the solution space, and by introducing valid inequalities. The model is solved exactly using a branch-and-price procedure that combines deferred variable generation and branch-and-bound. At each iteration, the subproblem generates a set of columns, which altogether correspond to an attractive valid packing for a single bin. We describe this subproblem, and the way it is modified in the branch-and-bound phase, after the branching constraints are added to the model. We report the computational times obtained in the solution of the bin-packing problems from the OR-Library test data sets. The linear relaxation of this model provides a strong lower bound for the bin-packing problem and leads to tractable branch-and-bound trees for the instances under consideration.
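One common way to write such an arc-flow model (generic notation, not necessarily the paper's exact formulation): take nodes 0, ..., C for the capacity positions of a bin, item arcs (i, i+w_k) for placing an item of size w_k, and loss arcs (i, i+1) for unused capacity. Each unit of flow from node 0 to node C then traces the packing pattern of one bin:

\[
\min\; z \quad \text{s.t.}\quad
\sum_{(i,j)\in A} x_{ij} - \sum_{(j,i)\in A} x_{ji} =
\begin{cases} z, & i = 0,\\ -z, & i = C,\\ 0, & \text{otherwise,} \end{cases}
\qquad
\sum_{(i,\,i+w_k)\in A} x_{i,\,i+w_k} \ge b_k \ \ \forall k,
\qquad x_{ij} \in \mathbb{Z}_{\ge 0},
\]

where b_k is the number of items of size w_k and z counts the bins used.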

268 citations


Journal ArticleDOI
TL;DR: Structural properties of and algorithms for stochastic integer programming models, mainly considering linear two‐stage models with mixed‐integer recourse (and their multi‐stage extensions) are surveyed.
Abstract: We survey structural properties of and algorithms for stochastic integer programming models, mainly considering linear two-stage models with mixed-integer recourse (and their multi-stage extensions).
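In the standard notation for such models (used generically here), a linear two-stage program with mixed-integer recourse reads

\[
\min_{x \in X}\; c^\top x + \mathbb{E}_{\xi}\big[\Phi(x,\xi)\big],
\qquad
\Phi(x,\xi) = \min_{y}\;\big\{\, q(\xi)^\top y \;:\; W y \ge h(\xi) - T(\xi)\,x,\;\; y \in \mathbb{Z}_{+}^{p}\times\mathbb{R}_{+}^{q} \,\big\},
\]

where the first-stage decision x is taken before the random data ξ are observed and the (partly integer) recourse decision y afterwards.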

221 citations


Journal ArticleDOI
TL;DR: The work is developed by investigating the question of how landscapes change under different search operators in the case of the n/m/P/Cmax flowshop problem, and proposing a statistical randomisation test to provide a numerical assessment of the landscape.
Abstract: Heuristic search methods have been increasingly applied to combinatorial optimization problems. While a specific problem defines a unique search space, different “landscapes” are created by the different heuristic search operators used to search it. In this paper, a simple example will be used to illustrate the fact that the landscape structure changes with the operator; indeed, it often depends even on the way the operators are applied. Recent attention has focused on trying to better understand the nature of these “landscapes”. Recent work by Boese et al. [2] has shown that instances of the TSP are often characterised by a “big valley” structure in the case of a 2-opt exchange operator, and a particular distance metric. In this paper, their work is developed by investigating the question of how landscapes change under different search operators in the case of the n/m/P/Cmax flowshop problem. Six operators and four distance metrics are defined, and the resulting landscapes examined. The work is further extended by proposing a statistical randomisation test to provide a numerical assessment of the landscape. Other conclusions relate to the existence of ultra-metricity, and to the usefulness or otherwise of hybrid neighbourhood operators.
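For reference, the n/m/P/Cmax objective whose landscapes are studied is the permutation-flowshop makespan, computable by the standard recursion C(i, j) = max(C(i-1, j), C(i, j-1)) + p(job_i, j). A minimal sketch with generic names, not the authors' code:

```python
def makespan(permutation, proc_times):
    """Makespan of a permutation flowshop schedule.
    proc_times[j][m] = processing time of job j on machine m."""
    n_machines = len(proc_times[0])
    completion = [0.0] * n_machines      # completion[m] = finish time of last job on machine m
    for job in permutation:
        for m in range(n_machines):
            prev = completion[m - 1] if m > 0 else 0.0
            completion[m] = max(completion[m], prev) + proc_times[job][m]
    return completion[-1]

# e.g. makespan([2, 0, 1], [[3, 2], [1, 4], [2, 2]])
```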

219 citations


Journal ArticleDOI
TL;DR: It is shown that an environment-dependent order-up-to level (i.e., base-stock) policy is optimal when the order cost is linear in order quantity and that a two-parameter environment-dependent (s, S) policy is optimal under reasonable conditions.
Abstract: We consider infinite-horizon periodic-review inventory models with unreliable suppliers where the demand, supply and cost parameters change with respect to a randomly changing environment. Although our analysis will be in the context of an inventory model, it is also appropriate for production systems with unreliable machines where planning is done on a periodic basis. It is assumed that the environmental process follows a Markov chain. The stock-flow equations of the inventory system subject to environmental fluctuations are represented using a two-dimensional stochastic process. We show that an environment-dependent order-up-to level (i.e., base-stock) policy is optimal when the order cost is linear in order quantity. When there is also a fixed cost of ordering, we show that a two-parameter environment-dependent (s, S) policy is optimal under reasonable conditions. We also discuss computational issues and some extensions.
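The two policy forms the abstract refers to are easy to state concretely; a sketch with illustrative parameter names, where the environment index selects the parameters of the current Markov-modulated state:

```python
def base_stock_order(inventory_position, S):
    """Order-up-to (base-stock) policy: raise the inventory position to S."""
    return max(0, S - inventory_position)

def s_S_order(inventory_position, s, S):
    """(s, S) policy: if the position has dropped to s or below, order up to S."""
    return S - inventory_position if inventory_position <= s else 0

# Environment-dependent version: one (s_i, S_i) pair per environmental state i.
def env_dependent_order(inventory_position, env_state, s_by_env, S_by_env):
    return s_S_order(inventory_position, s_by_env[env_state], S_by_env[env_state])
```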

158 citations


Journal ArticleDOI
TL;DR: Computational results show that the proposed algorithm produces optimal solutions for all test problems, and that it is very efficient in terms of time compared to existing algorithms in the literature.
Abstract: In this paper, the uncapacitated facility location problem is considered. A tabu search algorithm for solving this problem is proposed. The algorithm is tested on some standard test problems taken from the literature and its performance is compared with the known optimal solutions. Computational results show that the proposed algorithm produces optimal solutions for all test problems, and that it is very efficient in terms of time compared to existing algorithms in the literature.
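The abstract gives no implementation detail; the sketch below is a minimal flip-move tabu search skeleton for the uncapacitated facility location problem under the usual assumptions (open or close one facility per move, recency-based tabu list, aspiration on the best-known cost). All names and parameters are illustrative, not the paper's:

```python
def uflp_cost(open_set, fixed_cost, serve_cost):
    """Total cost: fixed cost of open facilities plus cheapest assignment of each customer."""
    if not open_set:
        return float("inf")
    assign = sum(min(serve_cost[j][i] for i in open_set) for j in range(len(serve_cost)))
    return sum(fixed_cost[i] for i in open_set) + assign

def tabu_search_uflp(fixed_cost, serve_cost, iterations=200, tenure=7):
    n = len(fixed_cost)
    current = set(range(n))                       # start with all facilities open
    best, best_cost = set(current), uflp_cost(current, fixed_cost, serve_cost)
    tabu = {}                                     # facility -> iteration until which it is tabu
    for it in range(iterations):
        candidates = []
        for i in range(n):                        # flip the open/closed status of facility i
            neighbour = current ^ {i}
            cost = uflp_cost(neighbour, fixed_cost, serve_cost)
            if tabu.get(i, -1) < it or cost < best_cost:   # aspiration criterion
                candidates.append((cost, i, neighbour))
        if not candidates:
            break
        cost, i, current = min(candidates)
        tabu[i] = it + tenure
        if cost < best_cost:
            best, best_cost = set(current), cost
    return best, best_cost
```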

143 citations


Journal ArticleDOI
TL;DR: A set of satisfiability tests and time-bound adjustment algorithms that can be applied to cumulative scheduling problems is presented; the second condition is shown to be closely related to the subset bound, a well-known lower bound of the m-machine problem.
Abstract: This paper presents a set of satisfiability tests and time-bound adjustment algorithms that can be applied to cumulative scheduling problems. An instance of the cumulative scheduling problem (CuSP) consists of (1) one resource with a given capacity, and (2) a set of activities, each having a release date, a deadline, a processing time and a resource capacity requirement. The problem is to decide whether there exists a start time assignment to all activities such that at no point in time the capacity of the resource is exceeded and all timing constraints are satisfied. The cumulative scheduling problem can be seen as a relaxation of the decision variant of the resource-constrained project scheduling problem. We present three necessary conditions for the existence of a feasible schedule. Two of them are obtained by polynomial relaxations of the CuSP. The third is based on energetic reasoning. We show that the second condition is closely related to the subset bound, a well-known lower bound of the m-machine problem. We also present three algorithms, based on the previously mentioned necessary conditions, to adjust release dates and deadlines of activities. These algorithms extend the time-bound adjustment techniques developed for the one-machine problem. They have been incorporated in a branch and bound procedure to solve the resource-constrained project scheduling problem. Computational results are reported.
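A common statement of the energetic-reasoning feasibility condition the abstract mentions (generic notation, not necessarily the paper's exact form): with resource capacity C and each activity i having release date r_i, deadline d_i, processing time p_i and requirement c_i, every feasible schedule must satisfy, for every interval [t_1, t_2],

\[
C\,(t_2 - t_1) \;\ge\; \sum_i c_i \cdot
\min\!\Big( t_2 - t_1,\;
\max\big(0,\, p_i - \max(0,\, t_1 - r_i)\big),\;
\max\big(0,\, p_i - \max(0,\, d_i - t_2)\big) \Big),
\]

i.e., the energy available in the interval must cover the minimum amount of work each activity is forced to perform inside it.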

130 citations


Journal ArticleDOI
TL;DR: This paper presents an integration of Mixed Integer Programming (MIP) and Constraint Logic Programming (CLP) which, like MIP, tightens bounds rather than adding constraints during search.
Abstract: This paper presents an integration of Mixed Integer Programming (MIP) and Constraint Logic Programming (CLP) which, like MIP, tightens bounds rather than adding constraints during search. The integrated system combines components of the CLP system ECLiPSe [7] and the MIP system CPLEX [5], in which constraints can be handled by either one or both components. Our approach is introduced in three stages. Firstly, we present an automatic transformation which maps CLP programs onto such CLP programs that any disjunction is eliminated in favour of auxiliary binary variables. Secondly, we present improvements of this mapping by using a committed choice operator and translations of pre-defined non-linear constraints. Thirdly, we introduce a new hybrid algorithm which reduces the solution space of the problem progressively by calling finite domain propagation of ECLiPSe as well as dual simplex of CPLEX. The advantages of this integration are illustrated by efficiently solving difficult optimisation problems like the Hoist Scheduling Problem [23] and the Progressive Party Problem [27].
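The first transformation stage (eliminating disjunctions in favour of auxiliary binary variables) follows the familiar big-M pattern; a generic illustration, not the system's actual output:

\[
x \le a \ \lor\ x \ge b
\quad\leadsto\quad
x \le a + M\,y,\qquad x \ge b - M\,(1 - y),\qquad y \in \{0,1\},
\]

where M is a sufficiently large constant; y = 0 activates the first disjunct and y = 1 the second.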

114 citations


Journal ArticleDOI
TL;DR: A new parallel tabu search heuristic for the vehicle routing problem with time window constraints (VRPTW) is described; its neighborhood structure is based on simple customer shifts and allows infeasible interim solutions.
Abstract: In this paper, we describe a new parallel tabu search heuristic for the vehicle routing problem with time window constraints (VRPTW). The neighborhood structure we propose is based on simple customer shifts and allows us to consider infeasible interim solutions. Similarly to the column generation approach used in exact algorithms, all routes generated by the tabu search heuristic are collected in a pool. To obtain a new initial solution for the tabu search heuristic, a fast set covering heuristic is periodically applied to the routes in the pool. The parallel heuristic has been implemented on a Multiple-Instruction Multiple-Data computer architecture with eight nodes. Computational results for Solomon's benchmark problems demonstrate that our parallel heuristic can produce high-quality solutions.

112 citations


Journal ArticleDOI
TL;DR: The basic principles of this methodology, currently used by financial institutions worldwide, are revised, and it is shown how inverse problems in finance can be naturally formulated in this framework.
Abstract: Portfolio replication is a powerful tool that has proven in practice its applicability to enterprise-wide risk problems such as static hedging in complete and incomplete markets and markets that gap; strategic asset and capital allocation; benchmark tracking; design of synthetic products; and portfolio compression. In this paper, we revise the basic principles behind this methodology, currently used by financial institutions worldwide, and present several practical examples of its application. We further show how inverse problems in finance can be naturally formulated in this framework. In contrast to mean-variance optimization, the scenario approach allows for general non-normal, discrete and subjective distributions, as well as for the accurate modeling of the full range of nonlinear instruments, such as options. It also provides an intuitive, operational framework for explaining basic financial theory.
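A generic scenario-based replication problem of the kind described (notation illustrative, not the authors' formulation): choose holdings x to track a target payoff across scenarios s with probabilities p_s,

\[
\min_{x}\ \sum_{s} p_s \,\Big|\, \textstyle\sum_{i} V_i(s)\,x_i - L(s) \Big|
\quad\text{s.t.}\quad \sum_i c_i x_i \le B,
\]

where V_i(s) is the value of instrument i in scenario s, L(s) the target (liability or benchmark) payoff, c_i the current prices and B the budget; the absolute deviations are linearized in the usual way with two nonnegative slack variables per scenario.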

Journal ArticleDOI
TL;DR: A simple heuristic is developed and an exchange procedure based on the tabu search metastrategy is applied to improve given solutions of the Steiner tree problem with hop constraints.
Abstract: The Steiner tree problem in graphs is to determine a minimum cost subgraph of a given graph spanning a set of specified vertices. In certain telecommunication networks, additional constraints such as, e.g., reliability constraints, have to be observed. Assume that a certain reliability is associated with each arc of the network, measuring the probability that the respective arc is operational. In case there has to be a guarantee that each message sent from a root vertex to a specified vertex reaches its destination with a certain probability, so-called hop constraints may be used to model the respective generalization. In this paper, we discuss the Steiner tree problem with hop constraints, i.e., a generalization of Steiner's problem in graphs where the number of arcs (hops) between a root node and any of the specified vertices is limited. A mathematical programming formulation is provided and extended to handle problem instances of moderate size. As the Steiner tree problem with hop constraints is NP-hard, a simple heuristic is developed and an exchange procedure based on the tabu search metastrategy is applied to improve given solutions. Numerical results are discussed for a number of problem instances derived from, e.g., well-known benchmark instances of Steiner's problem in graphs.

Journal ArticleDOI
TL;DR: This paper investigates the question of whether the assumption of the “representative agent”, often made in economic modeling, is innocuous or whether it may be misleading under certain circumstances, and shows that a negligible difference between the parameters that characterize the two producers can give dynamic evolutions that are qualitatively different from that of the symmetric game.
Abstract: In this paper, we investigate the question of whether the assumption of the “representative agent”, often made in economic modeling, is innocuous or whether it may be misleading under certain circumstances. In order to obtain some insight into this question, two dynamic Cournot duopoly games are considered, whose dynamics are represented by discrete-time dynamical systems. For each of these models, the dynamical behavior of the duopoly system with identical producers is compared to that with quasi-identical ones, in order to study the effects of small heterogeneities between the players. In the case of identical players, such dynamical systems become symmetric, and this implies that synchronized dynamics can be obtained, governed by a simpler one-dimensional model whose dynamics summarizes the common behavior of the two identical players. In both the examples, we show that a negligible difference between the parameters that characterize the two producers can give dynamic evolutions that are qualitatively different from that of the symmetric game, i.e. a breaking of the symmetry can cause a noticeable effect. The presence of such bifurcations suggests that economic systems with quasi-identical agents may evolve quite differently from systems with truly identical agents. This contrasts with the assumption, very common in the economic literature, that small heterogeneities of agents do not matter too much.

Journal ArticleDOI
TL;DR: A periodic maintenance model in which both preventive and corrective maintenance are imperfect, and two other imperfect maintenance models are proposed in this paper.
Abstract: Minimal repair and perfect repair, discussed extensively in the reliability and maintenance literature, represent two extreme types of repairs. Many repair or maintenance activities may not fall into these two extreme cases. A periodic maintenance model in which both preventive and corrective maintenance are imperfect, and two other imperfect maintenance models are proposed in this paper. Unlike other imperfect maintenance models, imperfect maintenance in this work is mainly treated in a way that successive operating times of a system are independent and decreasing by a fraction and successive maintenance times are independent and increasing by a fraction. The limiting expected maintenance cost rate and availability are derived and optimum maintenance policies are discussed for these three maintenance models. Based on the obtained maintenance cost rate and availability, a class of optimization problems with nonlinear programming formulations is demonstrated and a numerical example is presented.
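The abstract's treatment (successive operating times decreasing by a fraction, successive maintenance times increasing by a fraction) is easy to mimic in a small simulation; a hedged sketch with illustrative distributions and parameters a < 1 and b > 1, not the paper's model:

```python
import random

def simulate_cycle_lengths(n_periods, mean_uptime, mean_repair, a=0.95, b=1.05):
    """Successive operating times shrink by factor a, successive maintenance
    times grow by factor b (a, b illustrative); times are drawn independently."""
    uptimes, repairs = [], []
    up_scale, rep_scale = mean_uptime, mean_repair
    for _ in range(n_periods):
        uptimes.append(random.expovariate(1.0 / up_scale))
        repairs.append(random.expovariate(1.0 / rep_scale))
        up_scale *= a          # next operating time is stochastically shorter
        rep_scale *= b         # next maintenance time is stochastically longer
    return uptimes, repairs

ups, reps = simulate_cycle_lengths(20, mean_uptime=100.0, mean_repair=5.0)
availability = sum(ups) / (sum(ups) + sum(reps))
```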

Journal ArticleDOI
TL;DR: This investigation considers an application in which the demands are unknown prior to the creation of vehicle routes, but follow some known probability distribution, and solves a single‐vehicle problem with a relaxed IP.
Abstract: In the Vehicle Routing literature, investigations have concentrated on problems in which the customer demands are known precisely. We consider an application in which the demands are unknown prior to the creation of vehicle routes, but follow some known probability distribution. Because of the variability in customer demands, it is possible that the actual total customer demand may exceed the capacity of the vehicle assigned to service those customers. In this case, we have a route failure, and there is an additional cost related to the customer at which the vehicle stocks out. We aim to find routes that minimise the sum of the distance travelled plus any additional expected costs due to route failure. Because of the difficulty of this problem, this investigation only considers a single-vehicle problem. To find optimal routes, the integer L-shaped method is used. We solve a relaxed IP in which the distance travelled is modelled exactly, but the expected costs due to route failure are approximated. Constraints are dynamically added to prevent subtours and to further improve the relaxation. Additional constraints (optimality cuts) are added which progressively form a tighter approximation of the costs due to route failure. Gendreau et al. [6] apply a similar methodology to a closely related problem. They add optimality cuts, each of which imposes a useful bound on the route failure cost for only one solution. In addition to that cut, we generate “general” optimality cuts, each of which imposes a useful bound on the route failure cost for many solutions. Computational results attesting to the success of this approach are presented.

Journal ArticleDOI
TL;DR: The problem of scheduling jobs with release dates and sequence-dependent processing times on a single machine to minimize the total completion time is considered, and a dynamic programming formulation from which lower bounds are derived is given.
Abstract: We consider the problem of scheduling jobs with release dates and sequence-dependent processing times on a single machine to minimize the total completion time. We show that this problem is equivalent to the Cumulative Traveling Salesman Problem with additional time constraints. For this latter problem, we give a dynamic programming formulation from which lower bounds are derived. Two heuristic algorithms are proposed. Performance analysis of both lower bounds and heuristics on randomly generated test problems is carried out. Moreover, the application of the model and algorithms to the real problem of sequencing landing aircraft in the terminal area of a congested airport is analyzed. Computational results on realistic data sets show that heuristic solutions can be effective in practical contexts.

Journal ArticleDOI
TL;DR: It is shown that the dynamics of the Kaldor‐Kalecki model depends crucially on the time‐delay parameter T ‐ the gestation time period of investment, and that the limit cycle behaviour is independent of the assumption of nonlinearity of the investment function.
Abstract: The question of the determination of investment decisions and their links with economic activity leads us to formulate a new business cycle model. It is based on the dynamic multiplier approach and the distinction between investment and implementation. The study of the nonlinear behaviour of the Kaldor-Kalecki model represented by the second-order delay differential equations is presented. It is shown that the dynamics depends crucially on the time-delay parameter T - the gestation time period of investment. We apply the Poincare-Andronov-Hopf bifurcation theorem generalized for functional differential equations. It allows us to predict the occurrence of a limit cycle bifurcation for the time-delay parameter T = T_bif. The dependence of T_bif on the parameters of our model is discussed. As T is increased, the system bifurcates to limit cycle behaviour, then to multiply periodic and aperiodic cycles, and eventually tends towards chaotic behaviour. Our analysis of the dynamics of the Kaldor-Kalecki model shows that the limit cycle behaviour is independent of the assumption of nonlinearity of the investment function. The limit cycle is created only due to the time-delay parameter via the Hopf bifurcation mechanism. We also show that for a small time-delay parameter, the Kaldor-Kalecki model assumes the form of the Lienard equation.

Journal ArticleDOI
TL;DR: Some results of systematic studies of fine-grained parallel versions of the island model of genetic algorithms and of variants of the neighborhood model on the massively parallel computer MasPar MP1 with 16k processing elements are presented.
Abstract: In this paper, we present some results of our systematic studies of fine-grained parallel versions of the island model of genetic algorithms and of variants of the neighborhood model (also called diffusion model) on the massively parallel computer MasPar MP1 with 16k processing elements. These parallel genetic algorithms have been applied to a range of different problems (e.g. traveling salesman, capacitated lot sizing, resource-constrained project scheduling, flow shop, and warehouse location problems) in order to obtain an empirical basis for statements on their optimization quality.

Journal ArticleDOI
TL;DR: In a large computational study, it is found that the morphing procedure does not degrade the performance of an SA heuristic for SCPs with low degrees of cost and coverage correlation, and that it improves the performance for problems with high degrees of such correlation.
Abstract: We report on the use of a morphing procedure in a simulated annealing (SA) heuristic developed for set-covering problems (SCPs). Morphing enables the replacement of columns in solution with similar but more effective columns (morphs). We developed this procedure to solve minimum cardinality set-covering problems (MCSCPs) containing columns which exhibit high degrees of coverage correlation, and weighted set-covering problems (WSCPs) that exhibit high degrees of both cost correlation and coverage correlation. Such correlation structures are contained in a wide variety of real-world problems including many scheduling, design, and location applications. In a large computational study, we found that the morphing procedure does not degrade the performance of an SA heuristic for SCPs with low degrees of cost and coverage correlation (given a reasonable amount of computation time), and that it improves the performance of an SA heuristic for problems with high degrees of such correlations.

Journal ArticleDOI
TL;DR: This paper presents ZRAM, a portable parallel library of exhaustive search algorithms, as a case study that proves the feasibility of achieving simultaneously the goals of portability, efficiency, and convenience of use.
Abstract: Distributed and parallel computation is, on the one hand, the cheapest way to increase raw computing power. Turning parallelism into a useful tool for solving new problems, on the other hand, presents formidable challenges to computer science. We believe that parallel computation will spread among general users mostly through the ready availability of convenient and powerful program libraries. In contrast to general-purpose languages, a program library is specialized towards a well-defined class of problems and algorithms. This narrow focus permits developers to optimize algorithms, once and for all, for parallel computers of a variety of common architectures. This paper presents ZRAM, a portable parallel library of exhaustive search algorithms, as a case study that proves the feasibility of achieving simultaneously the goals of portability, efficiency, and convenience of use. Examples of massive computations successfully performed with the help of ZRAM illustrate its capabilities and use.

Journal ArticleDOI
TL;DR: A new lower bound for the open-shop problem is presented; it is at least as good as LB and typically improves it by 4%, which is remarkable for a shop problem known for its rather small gaps between LB and the optimal makespan.
Abstract: In this paper, we present a new lower bound for the open-shop problem. In shop problems, a classical lower bound LB is the maximum of job durations and machine loads. Contrary to the flow-shop and job-shop problems, the open-shop lacks tighter bounds. For the general open-shop problem OS, we propose an improved bound defined as the optimal makespan of a relaxed open-shop problem OS_k. In OS_k, the tasks of any job may be simultaneous, except for a selected job k. We prove the NP-hardness of OS_k. However, for a fixed processing route of k, OS_k boils down to subset-sum problems which can quickly be solved via dynamic programming. From this property, we define a branch-and-bound method for solving OS_k which explores the possible processing routes of k. The resulting optimal makespan gives the desired bound for the initial problem OS. We evaluate the method on difficult instances created by a special random generator, in which all job durations and all machine loads are equal to a given constant. Our new lower bound is at least as good as LB and improves it typically by 4%, which is remarkable for a shop problem known for its rather small gaps between LB and the optimal makespan. Moreover, the computational times on a PC are quite small on average. As a by-product of the study, we determined and propose to the research community a set of very hard open-shop instances, for which the new bound improves LB by up to 30%.
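The key subroutine the abstract mentions, a subset-sum problem solved by dynamic programming once the processing route of job k is fixed, is standard; a minimal sketch (names illustrative):

```python
def best_subset_sum(durations, capacity):
    """Largest total duration not exceeding capacity, by the classic
    boolean DP over reachable sums."""
    reachable = [True] + [False] * capacity
    for d in durations:
        for t in range(capacity, d - 1, -1):   # iterate downwards so each item is used once
            if reachable[t - d]:
                reachable[t] = True
    return max(t for t in range(capacity + 1) if reachable[t])

# e.g. best_subset_sum([3, 5, 7], 11) -> 10
```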

Journal ArticleDOI
TL;DR: A Lagrangian relaxation method is developed to compute lower bounds on the optimal value of the linear programming formulations and feasible solutions of the integer programming model, and a simulated annealing algorithm is designed to improve upon some of the upper bounds returned by the Lagrangian relaxation algorithm.
Abstract: In this paper, we consider a problem relevant to the telecommunications industry. In a two-level concentrator access network, each terminal has to be connected to a first-level concentrator, which in turn must be connected to a second-level concentrator. If no extra complicating constraints are taken into account, the problem, translated into the language of discrete location theory, amounts to an extension to two levels of facilities of the simple plant location problem (SPLP). A straightforward formulation can be used, but we propose a more complicated model involving more variables and constraints. We show that the linear programming relaxations of both formulations have the same optimal values. However, the second formulation can be tightened by using a family of polyhedral cuts that define facets of the convex hull of integer solutions. We develop a Lagrangian relaxation method to compute lower bounds on the optimal value of the linear programming formulations and feasible solutions of the integer programming model. A simulated annealing algorithm is also designed to improve upon some of the upper bounds returned by the Lagrangian relaxation algorithm. Experiments show the effectiveness of the formulation incorporating polyhedral cuts and of an approach combining a Lagrangian relaxation method and a simulated annealing algorithm.

Journal ArticleDOI
TL;DR: The reduction of the n‐city traveling salesman problem to that of finding a shortest source‐sink path in a layered network with a number of arcs linear in n and exponential in the parameter k provides a compact linear programming formulation.
Abstract: We consider the n-city traveling salesman problem (TSP), symmetric or asymmetric, with the following attributes. In one case, a positive integer k and an ordering (1,..., n) of the cities is given, and an optimal tour is sought subject to the condition that for any pair i, j ∈ (1,..., n), if j ≥ i + k, then i precedes j in the tour. In another case, position i in the tour has to be assigned to some city within k positions from i in the above ordering. This case is closely related to the TSP with time windows. In a third case, an optimal tour visiting m out of n cities is sought subject to constraints of the above two types. This is a special case of the Prize Collecting TSP (PCTSP). In any of the three cases, k may be replaced by city-specific integers k(i), i = 1,..., n. These problems arise in practice. For each class, we reduce the problem to that of finding a shortest source-sink path in a layered network with a number of arcs linear in n and exponential in the parameter k (which is independent of the problem size). Besides providing linear time algorithms for the solution of these problems, the reduction to a shortest path problem also provides a compact linear programming formulation. Finally, for TSPs or PCTSPs that do not have the required attributes, these algorithms can be used as heuristics that find in linear time a local optimum over an exponential-size neighborhood.
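The computational core of the reduction is nothing more than a shortest source-sink path in a layered acyclic network, computable in one pass over the arcs. A generic sketch (the layers and arc costs here are illustrative; in the paper they encode the ordering constraints):

```python
def shortest_path_layered(layers, arc_cost):
    """layers: list of lists of node ids, layer 0 = [source], last layer = [sink].
    arc_cost[(u, v)]: cost of an arc from a node in one layer to one in the next.
    Returns the shortest source-sink path length (one pass over the arcs)."""
    INF = float("inf")
    dist = {layers[0][0]: 0.0}
    for k in range(len(layers) - 1):
        for v in layers[k + 1]:
            dist[v] = min(
                (dist.get(u, INF) + arc_cost[(u, v)]
                 for u in layers[k] if (u, v) in arc_cost),
                default=INF,
            )
    return dist[layers[-1][0]]
```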

Journal ArticleDOI
TL;DR: For a variant of the classical one-dimensional bin-packing problem, a new heuristic with an asymptotic worst-case bound of 3/2 and O(n log² n) running time is presented.
Abstract: We are concerned with a variant of the classical one-dimensional bin-packing problem: n items have to be packed into unit-capacity bins such that the total number of used bins is minimized, with the additional constraint that at most k items can be assigned to one bin. In 1975, Krause et al. analyzed several approximation algorithms for this problem and showed that they all have an asymptotic worst-case performance ratio of 2. No better algorithms have been found so far. We present a new heuristic with an asymptotic worst-case bound of 3/2 and O(n log² n) running time.
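The cardinality constraint is easy to see in a plain first-fit baseline (this is not the paper's 3/2-bound heuristic, which is more involved); a sketch with illustrative names:

```python
def first_fit_with_cardinality(items, capacity, k):
    """Pack items into bins of the given capacity, at most k items per bin,
    using a plain first-fit rule. Returns the list of bins (lists of item sizes)."""
    bins, loads = [], []
    for size in items:
        for i, load in enumerate(loads):
            if load + size <= capacity and len(bins[i]) < k:
                bins[i].append(size)
                loads[i] += size
                break
        else:
            bins.append([size])     # no open bin fits: open a new one
            loads.append(size)
    return bins

# e.g. first_fit_with_cardinality([0.4, 0.3, 0.3, 0.6], capacity=1.0, k=2)
```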

Journal ArticleDOI
TL;DR: In this paper, a parallel implementation of the nested Benders decomposition algorithm is described, which employs a farming technique to parallelize nodal subproblem solutions and achieves near linear speed-up.
Abstract: Multistage stochastic linear programming has many practical applications for problems whose current decisions have to be made under future uncertainty. There are a variety of methods for solving the deterministic equivalent forms of these dynamic problems, including the simplex and interior-point methods and nested Benders decomposition, which decomposes the original problem into a set of smaller linear programming problems and has recently been shown to be superior to the alternatives for large problems. The Benders subproblems can be visualised as being attached to the nodes of a tree which is formed from the realisations of the random data process determining the uncertainty in the problem. This paper describes a parallel implementation of the nested Benders algorithm which employs a farming technique to parallelize nodal subproblem solutions. Differing structures of the test problems cause differing levels of speed-up on a variety of multicomputing platforms: problems with few variables and constraints per node do not gain from this parallelisation. We therefore employ stage aggregation to such problems to improve their parallel solution efficiency by increasing the size of the nodes and therefore the time spent calculating relative to the time spent communicating between processors. A parallel version of a sequential importance sampling solution algorithm based on local expected value of perfect information (EVPI) is developed which is applicable to extremely large multistage stochastic linear programmes which either have too many data paths to solve directly or a continuous distribution of possible realisations. It utilises the parallel nested Benders algorithm and a parallel version of an algorithm designed to calculate the local EVPI values for the nodes of the tree and achieves near linear speed-up.

Journal ArticleDOI
TL;DR: This paper compares the behavior of various indices under shifting process conditions and makes recommendations for selection of indices at differing levels of process performance.
Abstract: Practitioners of industrial statistics are generally familiar with the common C_p and C_pk process capability indices. However, many additional indices have been proposed, and knowledge of these is less widespread. More importantly, information regarding the indices' comparative behavior is lacking. This paper compares the behavior of various indices under shifting process conditions. Both useful and misleading characteristics of the indices are identified. We begin with a short history of process capability measures. Several process capability indices are reviewed. Application areas for capability indices are also summarized. The indices are grouped according to the loss functions which are used in their interpretation. Characteristics of the various indices are discussed. Finally, recommendations are made for selection of indices at differing levels of process performance.
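For readers unfamiliar with the two baseline indices, the usual definitions are (with USL and LSL the upper and lower specification limits, μ the process mean and σ the process standard deviation):

\[
C_p = \frac{USL - LSL}{6\sigma},
\qquad
C_{pk} = \min\!\left( \frac{USL - \mu}{3\sigma},\; \frac{\mu - LSL}{3\sigma} \right),
\]

so that C_p measures the spread of the process against the specification width while C_pk also penalizes an off-center mean.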

Journal ArticleDOI
TL;DR: A computational study for the Job Shop Scheduling Problem and a statistical analysis of the search space reveal the impact of inherent properties of the problem on local search based heuristics.
Abstract: A computational study for the Job Shop Scheduling Problem is presented. Thereby, emphasis is put on the structure of the search space as it appears for local search. A statistical analysis of the search space reveals the impact of inherent properties of the problem on local search based heuristics.

Journal ArticleDOI
TL;DR: A new framework for equilibrium selection is presented, which suggests that playing games recurrently in space and time may render one of the equilibria “spatially dominant”, and is compared with the Harsanyi-Selten risk-dominance concept.
Abstract: A new framework for equilibrium selection is presented. Playing games recurrently in space and time may render one of the equilibria “spatially dominant”. Prevailing initially on a large enough finite part of the space, it will take over on the whole space in the long run. In particular it will drive out the other equilibria along travelling waves. This new dominance concept is compared with the Harsanyi-Selten risk-dominance concept.

Journal ArticleDOI
TL;DR: Surprisingly, the BeFS-based strategies turn out to be inferior to the DFS-based strategies, both in terms of running times and in terms of bound calculations performed.
Abstract: The Best-First Search strategy (BeFS) and the Depth-First Search strategy (DFS) are regarded as the prime strategies when solving combinatorial optimization problems by parallel Branch-and-Bound (B&B) - BeFS because of efficiency with respect to the number of nodes explored, and DFS for reasons of space efficiency.

Journal ArticleDOI
TL;DR: This paper presents an approach to the problem of optimal dynamic choice in discrete or continuous time where there is a direct tradeoff of growth versus security, and yields simple two-dimensional graphs analogous to static mean variance analysis that capture the essence of the dynamic problem in a form useful for sound investment analysis.
Abstract: This paper presents an approach to the problem of optimal dynamic choice in discrete or continuous time where there is a direct tradeoff of growth versus security. In each period, the investor must allocate the available resources among various risky assets. The maximization of the expected logarithm of the period-by-period wealth, called the capital growth or the Kelly criterion, has many desirable properties such as maximizing the asymptotic rate of asset growth. However, this strategy has low risk aversion and typically has very large wagers which yield high variance of wealth. With uncertain parameters, this can lead to overbetting and loss of wealth. Using fractional Kelly strategies leads to a less volatile and safer sequence of wealth levels with less growth. The investor can choose a desirable tradeoff of growth and security appropriate for the problem under consideration. This approach yields simple two-dimensional graphs analogous to static mean variance analysis that capture the essence of the dynamic problem in a form useful for sound investment analysis. Use of the approach in practice is illustrated on favorable investments in blackjack, horse racing, lotto games, index and commodity futures and options trading.
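The growth/security tradeoff the paper plots can be illustrated with the simplest case, a repeated binary bet, where the full Kelly fraction has the closed form f* = (bp - q)/b and a fractional Kelly strategy scales it down. A sketch with illustrative numbers, not the paper's investment data:

```python
import math

def kelly_fraction(p, b):
    """Full Kelly fraction for a bet won with probability p paying b-to-1."""
    return (b * p - (1 - p)) / b

def expected_log_growth(f, p, b):
    """Expected log-growth per bet when wagering a fraction f of wealth."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p, b = 0.55, 1.0                 # illustrative: 55% win probability, even-money payoff
full = kelly_fraction(p, b)      # 0.10
half = 0.5 * full                # "half Kelly": lower growth, lower variance of wealth
print(full, expected_log_growth(full, p, b), expected_log_growth(half, p, b))
```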