
Showing papers in "Annals of Operations Research in 2009"


Journal ArticleDOI
TL;DR: A translation, with annotations, of E. Weiszfeld, "Sur le point pour lequel la somme des distances de n points donnés est minimum," Tôhoku Mathematical Journal (first series), 43 (1937), pp. 355–386.
Abstract: Translation with annotations of E. Weiszfeld, Sur le point pour lequel la somme des distances de n points donnés est minimum, Tôhoku Mathematical Journal (first series), 43 (1937) pp. 355–386. A short introduction about the translation is found in Appendix A. Appendix B lists particular notations used by Weiszfeld and their now more conventional equivalents. Numbered footnotes are those of the original paper of Weiszfeld. Boxed numerals are references to observations about the translation and comments of the translator, all to be found in Appendix C.
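Weiszfeld's 1937 paper is best known for the iterative scheme it introduced for this minisum (Fermat–Weber) problem: repeatedly replace the current point with the distance-weighted average of the given points. A minimal sketch of the plain iteration, without the refinements discussed in the annotations; the function name, starting point, and stopping parameters are illustrative choices, not taken from the paper:

```python
import math

def weiszfeld(points, iters=200, eps=1e-9):
    """Iteratively re-weighted average converging to the point
    minimising the sum of Euclidean distances to the given points."""
    # Start from the centroid of the given points.
    x = [sum(p[0] for p in points) / len(points),
         sum(p[1] for p in points) / len(points)]
    for _ in range(iters):
        num, den = [0.0, 0.0], 0.0
        for p in points:
            d = math.dist(x, p)
            if d < eps:          # iterate coincides with a given point
                return list(p)
            num[0] += p[0] / d
            num[1] += p[1] / d
            den += 1.0 / d
        x = [num[0] / den, num[1] / den]
    return x
```

For four points at the corners of a square the iteration settles at the centre, the unique minimiser by symmetry.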

225 citations


Journal ArticleDOI
TL;DR: A new definition of outlier, the cluster-based outlier, is presented; it is meaningful and accounts for local data behavior. The paper also shows how to detect such outliers with the clustering algorithm LDBSCAN, which is capable of finding clusters and assigning LOF values to single points.
Abstract: Outlier detection has important applications in the field of data mining, such as fraud detection, customer behavior analysis, and intrusion detection. Outlier detection is the process of detecting the data objects which are grossly different from or inconsistent with the remaining set of data. Outliers are traditionally considered as single points; however, there is a key observation that many abnormal events have both temporal and spatial locality, which might form small clusters that also need to be deemed as outliers. In other words, not only a single point but also a small cluster can probably be an outlier. In this paper, we present a new definition for outliers, the cluster-based outlier, which is meaningful and gives weight to the local data behavior, and we show how to detect such outliers with the clustering algorithm LDBSCAN (Duan et al. in Inf. Syst. 32(7):978–986, 2007), which is capable of finding clusters and assigning LOF (Breunig et al. in Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, ACM Press, pp. 93–104, 2000) to single points.
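The LOF score referenced above compares each point's local density with that of its neighbours; values well above 1 flag outliers. A brute-force sketch of the standard LOF computation of Breunig et al. (exact k nearest neighbours, ties broken arbitrarily), not the LDBSCAN-based extension proposed in the paper:

```python
import math

def lof_scores(points, k):
    """Brute-force Local Outlier Factor for a list of 2-D points."""
    n = len(points)
    nbrs, kdist = [], []
    for i in range(n):
        # all other points sorted by distance to point i
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: math.dist(points[i], points[j]))
        nbrs.append(order[:k])                       # k nearest neighbours
        kdist.append(math.dist(points[i], points[order[k - 1]]))

    def lrd(i):
        # local reachability density: inverse mean reachability distance
        reach = sum(max(kdist[j], math.dist(points[i], points[j]))
                    for j in nbrs[i])
        return len(nbrs[i]) / reach

    dens = [lrd(i) for i in range(n)]
    # LOF: ratio of the neighbours' densities to the point's own density
    return [sum(dens[j] for j in nbrs[i]) / (len(nbrs[i]) * dens[i])
            for i in range(n)]
```

On a tight cluster plus one distant point, the distant point scores far above 1 while cluster members score near 1.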

183 citations


Journal ArticleDOI
TL;DR: The CGBH procedure proposed in this paper can obtain an efficient assignment of the surgical cases if the other resources are well organized, and the best solution obtained among the four criteria is often very close to the LP lower bound, which means that the proposed algorithm can obtain a near-optimal solution.
Abstract: In some hospitals, an “open scheduling” strategy is applied to solve the operating room planning problem; i.e., surgeons can choose any workday for their surgical cases, and the staffing of anesthetists and nurses is adjusted to maximize the efficiency of operating room utilization. In this paper, we aim at obtaining an efficient operating program for an operating theatre with several multifunctional operating rooms by using this “open scheduling” strategy. First, a mathematical model is constructed to assign surgical cases to operating rooms within one week. This model complies with the availability of operating rooms and surgeons, and its objective is not only to maximize utilization of operating rooms, but also to minimize their overtime cost. Then a column-generation-based heuristic (CGBH) procedure is proposed, in which four different criteria are compared with each other so as to find the solution with the best performance. In addition, the best approximate solution, obtained by this CGBH procedure after running all the proposed criteria, is compared with the lower bound obtained by an explicit column generation (CG) procedure, LP, to evaluate the distance between the approximate solution and the optimal one. Although, according to the experimental results, no criterion is found superior to the other three in both robustness and quality of the solution obtained, the best solution obtained among those four criteria is often very close to LP, which means that the proposed algorithm can obtain a near-optimal solution. In short, the CGBH procedure proposed in this paper can obtain an efficient assignment of the surgical cases if the other resources (anesthesia and nursing staff, equipment, beds in the recovery room, etc.) are well organized.

143 citations


Journal ArticleDOI
TL;DR: The solver presented in this paper was among the finalists in all three tracks of the International Timetabling Competition 2007 and the winner of two of them.
Abstract: This paper provides a brief description of a constraint-based solver that was successfully applied by the author to the problem instances in all three tracks of the International Timetabling Competition 2007 (For more details see the official competition website at http://www.cs.qub.ac.uk/itc2007.). The solver presented in this paper was among the finalists in all three tracks and the winner of two.

143 citations


Journal ArticleDOI
TL;DR: Aggregation approaches to a large class of location models are surveyed, various aggregation error measures are considered, some effective (and ineffective) aggregation error measures are identified, and some open research areas are discussed.
Abstract: Location problems occurring in urban or regional settings may involve many tens of thousands of “demand points,” usually individual private residences. In modeling such problems it is common to aggregate demand points to obtain tractable models. We survey aggregation approaches to a large class of location models, consider and compare various aggregation error measures, identify some effective (and ineffective) aggregation error measures, and discuss some open research areas.

100 citations


Journal ArticleDOI
TL;DR: A hybrid strategy that combines an evolutionary algorithm with quadratic programming is designed to solve the NP-hard problem of reproducing the performance of a stock-market index by investing in a subset of the stocks included in the index.
Abstract: Index tracking consists in reproducing the performance of a stock-market index by investing in a subset of the stocks included in the index. A hybrid strategy that combines an evolutionary algorithm with quadratic programming is designed to solve this NP-hard problem: Given a subset of assets, quadratic programming yields the optimal tracking portfolio that invests only in the selected assets. The combinatorial problem of identifying the appropriate assets is solved by a genetic algorithm that uses the output of the quadratic optimization as fitness function. This hybrid approach allows the identification of quasi-optimal tracking portfolios at a reduced computational cost.
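The decomposition is easy to illustrate: an outer combinatorial search proposes asset subsets, and an inner convex fit determines the weights for each subset. In the sketch below, exhaustive enumeration stands in for the genetic algorithm and an exponentiated-gradient fit on the simplex stands in for the quadratic program; both are hedged simplifications of the method described above, and all names and parameters are illustrative:

```python
import itertools, math

def fit_weights(sub, idx, steps=200, eta=0.5):
    """Inner problem: minimise mean squared tracking error over the
    simplex (exponentiated gradient, a stand-in for the QP)."""
    k, T = len(sub), len(idx)
    w = [1.0 / k] * k
    for _ in range(steps):
        g = [0.0] * k
        for t in range(T):
            diff = sum(w[i] * sub[i][t] for i in range(k)) - idx[t]
            for i in range(k):
                g[i] += 2.0 * diff * sub[i][t] / T
        w = [wi * math.exp(-eta * gi) for wi, gi in zip(w, g)]
        s = sum(w)
        w = [wi / s for wi in w]          # renormalise onto the simplex
    return w

def track_index(returns, idx, k):
    """Outer problem: search over k-asset subsets (exhaustive here;
    the paper uses a genetic algorithm for this step)."""
    best = None
    for subset in itertools.combinations(range(len(returns)), k):
        sub = [returns[i] for i in subset]
        w = fit_weights(sub, idx)
        err = sum((sum(w[i] * sub[i][t] for i in range(len(sub))) - idx[t]) ** 2
                  for t in range(len(idx))) / len(idx)
        if best is None or err < best[0]:
            best = (err, subset, w)
    return best
```

If the index is an exact 50/50 blend of two of the assets, the search recovers that pair with near-zero tracking error.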

92 citations


Journal ArticleDOI
TL;DR: A new optimization method is proposed, which is partly based on Differential Evolution and on combinatorial search, that can tackle the index-tracking problem as complex as it is, generating accurate and robust results.
Abstract: Index-tracking is a low-cost alternative to active portfolio management. The implementation of a quantitative approach, however, is a major challenge from an optimization perspective. The optimal selection of a group of assets that can replicate the index of a much larger portfolio requires both finding the optimal subset of assets and fine-tuning their weights. The former is a combinatorial problem, the latter a continuous numerical one. Both problems need to be addressed simultaneously, because whether or not a selection of assets is promising depends on the allocation weights and vice versa. Moreover, the problem is usually of high dimension. Typically, an optimal subset of 30–150 positions out of 100–600 needs to be selected and their weights determined. Search heuristics can be a valuable alternative to traditional methods, which often cannot deal with the problem. In this paper, we propose a new optimization method, partly based on Differential Evolution (DE) and on combinatorial search. The main advantage of our method is that it can tackle the index-tracking problem as complex as it is, generating accurate and robust results.

87 citations


Journal ArticleDOI
TL;DR: A modified PRP method is given which possesses global convergence for nonconvex functions and an R-linear convergence rate for uniformly convex functions; it has the sufficient descent property and automatically stays in a trust region without carrying out any line search technique.
Abstract: This paper gives a modified PRP method which possesses global convergence for nonconvex functions and an R-linear convergence rate for uniformly convex functions. Furthermore, the presented method has the sufficient descent property and automatically stays in a trust region without carrying out any line search technique. Numerical results indicate that the new method is interesting for the given test problems.
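For reference, the classical PRP update computes beta_k = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2 and sets d_{k+1} = -g_{k+1} + beta_k d_k. Below is a hedged sketch of the textbook PRP+ variant (nonnegative beta, a descent-direction restart, and an Armijo backtracking line search); note this is the standard scheme, not the paper's modified method, which avoids line searches altogether:

```python
def prp_minimize(f, grad, x, iters=500):
    """Textbook PRP+ conjugate gradient with Armijo backtracking;
    restarts to steepest descent if the PRP direction is not descent."""
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        gd = sum(gi * di for gi, di in zip(g, d))
        if gd >= 0:                          # safeguard: restart
            d = [-gi for gi in g]
            gd = -sum(gi * gi for gi in g)
        a, fx = 1.0, f(x)                    # Armijo backtracking
        while f([xi + a * di for xi, di in zip(x, d)]) > fx + 1e-4 * a * gd:
            a *= 0.5
            if a < 1e-12:
                break
        x = [xi + a * di for xi, di in zip(x, d)]
        g_new = grad(x)
        # PRP+ beta: clipped at zero, guarded against division by zero
        beta = max(0.0, sum(gn * (gn - gi) for gn, gi in zip(g_new, g))
                        / max(sum(gi * gi for gi in g), 1e-16))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```

On a strongly convex quadratic the iterates converge to the unique minimiser.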

72 citations


Journal ArticleDOI
TL;DR: The results indicate that the majority of the farmers increased their efficiency over time, which may support the existence of sustainability in agriculture.
Abstract: The aim of this paper is to use DEA models to evaluate sustainability in agriculture. Several variables are taken into account and the resulting efficiency is measured by comparison. The performance of family farms is analysed here (variables: farmed area, work force, and production). As agricultural sustainability depends on the maintenance of systems of production for long periods of time, the models were run for the years 1986 and 2002. Tiered DEA models were used to group farmers into sustainability categories. Non-parametric regression models were used to identify the factors affecting the efficiency measurements. All the results indicate that the majority of the farmers increased their efficiency over time. These improvements may support the existence of sustainability.

71 citations


Journal ArticleDOI
TL;DR: General conditions on inequality measures sufficient to provide such equitable consistency are introduced and verified, thus showing how these measures can be used in location models without leading to inferior distributions of distances.
Abstract: While making location decisions, the distribution of distances (outcomes) among the service recipients (clients) is an important issue. In order to comply with the minimization of distances as well as with an equal consideration of the clients, mean-equity approaches are commonly used. They quantify the problem in a lucid form of two criteria: the mean outcome representing the overall efficiency and a scalar measure of inequality of outcomes to represent the equity (fairness) aspects. The mean-equity model is appealing to decision makers and allows a simple trade-off analysis. On the other hand, for typical dispersion indices used as inequality measures, the mean-equity approach may lead to inferior conclusions with respect to the distances minimization. Some inequality measures, however, can be combined with the mean itself into optimization criteria that remain in harmony with both inequality minimization and minimization of distances. In this paper we introduce general conditions for inequality measures sufficient to provide such an equitable consistency. We verify the conditions for the basic inequality measures thus showing how they can be used in location models not leading to inferior distributions of distances.

68 citations


Journal ArticleDOI
TL;DR: An algorithm which finds the optimal locations, relocation times and the total cost, for all three types of distance measurements and various weight functions, is developed.
Abstract: In this paper a single facility location problem with multiple relocation opportunities is investigated. The weight associated with each demand point is a known function of time. We consider either rectilinear, or squared Euclidean, or Euclidean distances. Relocations can take place at pre-determined times. The objective function is to minimize the total location and relocation costs. An algorithm which finds the optimal locations, relocation times and the total cost, for all three types of distance measurements and various weight functions, is developed. Locations are found using constant weights, and relocations times are the solution to a Dynamic Programming or Binary Integer Programming (BIP) model. The time horizon can be finite or infinite.

Journal ArticleDOI
TL;DR: The problem considered in this paper is to find p locations for p facilities such that the weights attracted to each facility will be as close as possible to one another, and heuristic algorithms (two types of a steepest descent approach and tabu search) are proposed for its solution.
Abstract: The problem considered in this paper is to find p locations for p facilities such that the weights attracted to each facility will be as close as possible to one another. We model this problem as minimizing the maximum among all the total weights attracted to the various facilities. We propose solution procedures for the problem on a network, and for the special cases of the problem on a tree or on a path. The complexity of the problem is analyzed; for the problem on a path, O(n) algorithms are proposed for p=2 facilities and an O(pn^3) dynamic programming algorithm for p>2. Heuristic algorithms (two types of a steepest descent approach and tabu search) are proposed for its solution. Extensive computational results are presented.
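For intuition on the path case with p=2: if clients patronise their nearest facility, each facility serves a contiguous segment, so the core task is choosing the split point that balances the two segment weights. A hedged one-pass sketch of that reduced problem (the paper's O(n) algorithms handle the full version, including where on the path the facilities sit):

```python
def best_two_way_split(weights):
    """Split a sequence of client weights into a prefix and a suffix,
    minimising the larger of the two totals (min-max balance)."""
    total = sum(weights)
    prefix, best = 0, (total, 0)         # (objective, split index)
    for i, w in enumerate(weights, 1):
        prefix += w
        best = min(best, (max(prefix, total - prefix), i))
    return best
```

For weights [3, 1, 4, 1, 5], splitting after the third client gives loads 8 and 6, the best achievable balance.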

Journal ArticleDOI
TL;DR: An optimization model to support decisions in the aggregate production planning of sugar and ethanol milling companies is presented and a case study was developed in a Brazilian mill and the results highlight the applicability of the proposed approach.
Abstract: This work presents an optimization model to support decisions in the aggregate production planning of sugar and ethanol milling companies. The mixed integer programming formulation proposed is based on industrial process selection and production lot-sizing models. The aim is to help the decision makers in selecting the industrial processes used to produce sugar, ethanol and molasses, as well as in determining the quantities of sugarcane crushed, the selection of sugarcane suppliers and sugarcane transport suppliers, and the final product inventory strategy. The planning horizon is the whole sugarcane harvesting season and decisions are taken on a discrete fraction of time. A case study was developed in a Brazilian mill and the results highlight the applicability of the proposed approach.

Journal ArticleDOI
TL;DR: A model for ranking suppliers in the presence of weight restrictions, nondiscretionary factors, and cardinal and ordinal data is proposed.
Abstract: Selecting an appropriate supplier for outsourcing is now one of the most important decisions of the purchasing department. This paper proposes a model for ranking suppliers in the presence of weight restrictions, nondiscretionary factors, and cardinal and ordinal data. A numerical example demonstrates the application of the proposed method.

Journal ArticleDOI
TL;DR: It is demonstrated that CBCFT is able to identify clusters with large variance in size and shape, is robust to outliers, and overcomes the low execution efficiency of CHAMELEON.
Abstract: Clustering analysis plays an important role in the field of data mining. Nowadays, hierarchical clustering is becoming one of the most widely used clustering techniques. However, for most hierarchical clustering algorithms, the requirements of high execution efficiency and high accuracy of the clustering result cannot be met at the same time. After analyzing the advantages and disadvantages of the hierarchical algorithms, the paper puts forward a two-stage clustering algorithm, named Chameleon Based on Clustering Feature Tree (CBCFT), which hybridizes the Clustering Feature Tree of the BIRCH algorithm with the CHAMELEON algorithm. By calculating the time complexity of CBCFT, the paper argues that the time complexity of CBCFT increases linearly with the number of data points. By experimenting on a sample data set, this paper demonstrates that CBCFT is able to identify clusters with large variance in size and shape and is robust to outliers. Moreover, the result of CBCFT is similar to that of CHAMELEON, but CBCFT overcomes the low execution efficiency of CHAMELEON. Although the execution time of CBCFT is longer than that of BIRCH, the clustering result of CBCFT is much more satisfactory than that of BIRCH. Finally, through a customer segmentation case for the HUBEI branch of the Chinese Petroleum Corp., the paper demonstrates that the clustering result of the case is meaningful and useful.

Journal ArticleDOI
TL;DR: Assessment of the initial balance sheet position of the pension fund is important for the optimal investment strategy because of the accounting rules embedded in the model and tracking of both the market and purchasing value of assets.
Abstract: It is possible to model a wide range of portfolio management problems using stochastic programming. This approach requires the generation of input scenarios and probabilities, which represent the evolution of the return on investment, the stream of liabilities and other random phenomena of the problem and respect the no-arbitrage properties. The quality of the recommended capital allocation depends on the quality of the input scenarios, and a validation of results is necessary. Appropriate scenario generation techniques and output analysis methods are described in the context of a defined contribution pension fund and applied to the specific model of a Czech pension fund. The numerical results indicate various components that influence the recommended investment decisions and the fund’s achievements. In particular, the initial balance sheet position of the pension fund is important for the optimal investment strategy because of the accounting rules embedded in the model and the tracking of both the market and purchasing value of assets.

Journal ArticleDOI
TL;DR: In this paper, the multiple server center location problem is introduced and the objective is to minimize the maximum time spent by any customer, including travel time and waiting time at the server sites.
Abstract: In this paper, we introduce the multiple server center location problem. p servers are to be located at nodes of a network. Demand for services of these servers is located at each node, and a subset of nodes are to be chosen to locate one or more servers in each. Each customer selects the closest server. The objective is to minimize the maximum time spent by any customer, including travel time and waiting time at the server sites. The problem is formulated and analyzed. Results for heuristic solution approaches are reported.

Journal ArticleDOI
TL;DR: A new robust algorithm for fault detection of point mechanisms is developed; it detects faults by comparing what can be considered the ‘normal’ or ‘expected’ shape of some signal with the actual shape observed as new data become available.
Abstract: Point mechanisms are special track elements whose failures result in delays and increased operating costs. In some cases such failures cause fatalities. A new robust algorithm for fault detection of point mechanisms is developed. It detects faults by comparing what can be considered the ‘normal’ or ‘expected’ shape of some signal with the actual shape observed as new data become available. The expected shape is computed as a forecast from a combination of models. The proposed system deals with complicated features of the data in the case study, the main ones being the irregular sampling interval of the data and the time-varying nature of the periodic behaviour. The system models are set up in a continuous-time framework and the system has been tested on a large dataset taken from a point mechanism operating on a commercial line.

Journal ArticleDOI
TL;DR: A class of special chance-constrained programming models and a genetic algorithm are developed for the vendor selection problem; the results demonstrate that the suggested approach could provide managers with a promising way of studying the stochastic vendor selection problem.
Abstract: We study a vendor selection problem in which the buyer allocates an order quantity for an item among a set of suppliers such that the required aggregate quality, service, and lead time requirements are achieved at minimum cost. Some or all of these characteristics can be stochastic and hence, we treat the aggregate quality and service as uncertain. We develop a class of special chance-constrained programming models, and a genetic algorithm is designed for the vendor selection problem. The solution procedure is tested on randomly generated problems and our computational experience is reported. The results demonstrate that the suggested approach could provide managers with a promising way of studying the stochastic vendor selection problem.

Journal ArticleDOI
TL;DR: This paper considers a single-machine scheduling problem with both deterioration and learning effects; several polynomial time algorithms are proposed to optimally solve the problem for the objectives considered.
Abstract: This paper considers a single-machine scheduling problem with both deterioration and learning effects. The objectives are to minimize, respectively, the makespan, the total completion time, the sum of weighted completion times, the sum of the kth power of the job completion times, the maximum lateness, the total absolute differences in completion times, and the sum of earliness, tardiness and common due-date penalties. Several polynomial time algorithms are proposed to optimally solve the problem with the above objectives.

Journal ArticleDOI
TL;DR: This work proves the existence of weighted games with this property with fewer than 12 voters, and simultaneously analyzes both problems by using linear programming.
Abstract: A basic problem in the theory of simple games and other fields is to study whether a simple game (Boolean function) is weighted (linearly separable). A second related problem consists in studying whether a weighted game has a minimum integer realization. In this paper we simultaneously analyze both problems by using linear programming.

Journal ArticleDOI
TL;DR: Analysis of the ip model reveals that the integrality constraints on a large number of binary variables can be relaxed without yielding a fractional solution, so problem instances of a realistic size can be solved in a couple of seconds on average.
Abstract: Given a product design and a repair network, a level of repair analysis (lora) determines for each component in the product (1) whether it should be discarded or repaired upon failure and (2) at which echelon in the repair network to do this. The objective of the lora is to minimize the total (variable and fixed) costs. We propose an ip model that generalizes the existing models, based on cases that we have seen in practice. Analysis of our model reveals that the integrality constraints on a large number of binary variables can be relaxed without yielding a fractional solution. As a result, we are able to solve problem instances of a realistic size in a couple of seconds on average. Furthermore, we suggest some improvements to the lora analysis in the current literature.

Journal ArticleDOI
TL;DR: An iterative heuristic for the location-routing problem on the plane is presented and it is shown that it pays not to ignore the routing aspects when solving continuous location problems.
Abstract: In physical distribution the location of depots and vehicle routes are interdependent problems, but they are usually treated independently. Location-routing is the study of solving locational problems such that routing considerations are taken into account. We present an iterative heuristic for the location-routing problem on the plane. For each depot the Weber problem is solved using the end-points of the routes found previously as input nodes to the Weiszfeld procedure. Although the improvements found are usually small they show that it pays not to ignore the routing aspects when solving continuous location problems. Possible research avenues in continuous location-routing will also be suggested.

Journal ArticleDOI
TL;DR: It is shown that this method for the control of multi-class queueing networks over a finite time horizon is asymptotically optimal when the number of items that is processed and the processing speed increase.
Abstract: We propose a method for the control of multi-class queueing networks over a finite time horizon. We approximate the multi-class queueing network by a fluid network and formulate a fluid optimization problem which we solve as a separated continuous linear program. The optimal fluid solution partitions the time horizon into intervals in which constant fluid flow rates are maintained. We then use a policy by which the queueing network tracks the fluid solution. To that end we model the deviations between the queueing and the fluid network in each of the intervals by a multi-class queueing network with some infinite virtual queues. We then keep these deviations stable by an adaptation of a maximum pressure policy. We show that this method is asymptotically optimal when the number of items that is processed and the processing speed increase. We illustrate these results through a simple example of a three-stage re-entrant line.

Journal ArticleDOI
TL;DR: Through the case study, it is found that the inclusion of big clients in the first-level routing in the analysis leads to a better network design in terms of total logistic costs.
Abstract: In this study, we formulate and analyze a strategic design model for three-echelon distribution systems with two-level routing considerations. The key design decisions considered are: the number and locations of distribution centers (DC’s), which big clients (clients with larger demand) should be included in the first-level routing (the routing between plants and DC’s), the first-level routing between plants, DC’s and big clients, and the second-level routing between DC’s and other clients not included in the first-level routing. A hybrid genetic algorithm embedded with a routing heuristic is developed to efficiently find near-optimal solutions. The quality of the solution to a series of small test problems is evaluated by comparison with the optimal solution obtained using LINGO 9.0. In test problems for which exact solutions are available, the heuristic solution is within 1% of optimal. Finally, the model is applied to design a national finished goods distribution system for a Taiwan label-stock manufacturer. Through the case study, we find that the inclusion of big clients in the first-level routing leads to a better network design in terms of total logistic costs.

Journal ArticleDOI
TL;DR: This work considers the optimal portfolio selection problem in a multiple period setting where the investor maximizes the expected utility of the terminal wealth in a stochastic market and determines various quantities of interest including its Fourier transform.
Abstract: We consider the optimal portfolio selection problem in a multiple period setting where the investor maximizes the expected utility of the terminal wealth in a stochastic market. The utility function has an exponential structure and the market states change according to a Markov chain. The states of the market describe the prevailing economic, financial, social and other conditions that affect the deterministic and probabilistic parameters of the model. This includes the distributions of the random asset returns as well as the utility function. The problem is solved using the dynamic programming approach to obtain the optimal solution and an explicit characterization of the optimal policy. We also discuss the stochastic structure of the wealth process under the optimal policy and determine various quantities of interest including its Fourier transform. The exponential return-risk frontier of the terminal wealth is shown to have a linear form. Special cases of multivariate normal and exponential returns are discussed together with a numerical illustration.

Journal ArticleDOI
TL;DR: This paper presents an urban rapid transit network design model, which consists of the location of train alignments and stations in an urban traffic context, and some algorithms based on Benders decomposition have been defined in order to solve the location problem.
Abstract: This paper presents an urban rapid transit network design model, which consists of the location of train alignments and stations in an urban traffic context. The design attempts to maximize the public transportation demand using the new infrastructure, considering a limited budget and number of transit lines. The location problem also incorporates the fact that users can choose their transportation mode and trips. In real cases, this problem is complex to solve because it has thousands of binary variables and constraints, and cannot be solved efficiently by Branch and Bound. For this reason, some algorithms based on Benders decomposition have been defined in order to solve it. These algorithms have been compared in test networks.

Journal ArticleDOI
TL;DR: In this paper the Fixed Charge Transportation Problem is considered and a new heuristic approach is proposed, based on the intensive use of Lagrangean relaxation techniques, which obtains solutions similar to or better than those of state-of-the-art algorithms.
Abstract: In this paper the Fixed Charge Transportation Problem is considered. A new heuristic approach is proposed, based on the intensive use of Lagrangean relaxation techniques. The most novel aspects of this approach are new Lagrangean relaxation and decomposition methods, the consideration of several core problems defined from the previously computed Lagrangean reduced costs, the heuristic selection of the most promising core problem, and the final resort to enumeration by applying a branch and cut algorithm to the selected core problem. For problems with a small ratio of the average fixed cost to the average variable cost (lower than or equal to 25), the proposed method obtains solutions similar to or better than those of state-of-the-art algorithms, such as the tabu search procedure and the parametric ghost image processes. For larger ratios (between 50 and 180), the quality of the obtained solutions can be considered to be halfway between those two methods.

Journal ArticleDOI
TL;DR: The classical result of optimality at a destination or a source with majority weight, valid in a metric space, may be generalized to certain quasimetric spaces, and the majority theorem is further extended to Fermat–Weber problems with mixed asymmetric distances.
Abstract: The Fermat–Weber problem is considered with distance defined by a quasimetric, an asymmetric and possibly nondefinite generalisation of a metric. In such a situation a distinction has to be made between sources and destinations. We show how the classical result of optimality at a destination or a source with majority weight, valid in a metric space, may be generalized to certain quasimetric spaces. The notion of majority has however to be strengthened to supermajority, defined by way of a measure of the asymmetry of the distance, which should be finite. This extended majority theorem applies to most asymmetric distance measures previously studied in literature, since these have finite asymmetry measure. Perhaps the most important application of quasimetrics arises in semidirected networks, which may contain edges of different (possibly zero) length according to direction, or directed edges. Distance in a semidirected network does not necessarily have finite asymmetry measure. But it is shown that an adapted majority result holds nevertheless in this important context, thanks to an extension of the classical node-optimality result to semidirected networks with constraints. Finally the majority theorem is further extended to Fermat–Weber problems with mixed asymmetric distances.

Journal ArticleDOI
TL;DR: A neural network approach based on a self-organizing map is proposed for solving single-depot location-routing problems in the plane and it is shown that this approach competes well with different well-known heuristics.
Abstract: While in location planning it is often assumed that deliveries are made on a direct-trip basis, in fact deliveries, e.g., to the different supermarkets belonging to a specific chain or to retail outlets of any kind, usually are performed as round-trips. Therefore, it is often necessary to combine the two issues of locating a depot and of planning tours in one problem formulation.