
Showing papers in "European Journal of Operational Research in 2009"


Journal ArticleDOI
TL;DR: Basic features that facility location models must capture to support decision-making involved in strategic supply chain planning are identified, and applications ranging across various industries are presented.
Abstract: Facility location decisions play a critical role in the strategic design of supply chain networks. In this paper, a literature review of facility location models in the context of supply chain management is given. We identify basic features that such models must capture to support decision-making involved in strategic supply chain planning. In particular, the integration of location decisions with other decisions relevant to the design of a supply chain network is discussed. Furthermore, aspects related to the structure of the supply chain network, including those specific to reverse logistics, are also addressed. Significant contributions to the current state-of-the-art are surveyed taking into account numerous factors. Supply chain performance measures and optimization techniques are also reviewed. Applications of facility location models to supply chain network design ranging across various industries are presented. Finally, a list of issues requiring further research is highlighted.

1,629 citations


Journal ArticleDOI
TL;DR: A sketch of some of the major research thrusts in data envelopment analysis (DEA) over the three decades since the appearance of the seminal work of Charnes et al. is provided.
Abstract: This paper provides a sketch of some of the major research thrusts in data envelopment analysis (DEA) over the three decades since the appearance of the seminal work of Charnes et al. (1978) [Charnes, A., Cooper, W.W., Rhodes, E.L., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2, 429–444]. The focus herein is primarily on methodological developments, and in no manner does the paper address the many excellent applications that have appeared during that period. Specifically, attention is primarily paid to (1) the various models for measuring efficiency, (2) approaches to incorporating restrictions on multipliers, (3) considerations regarding the status of variables, and (4) modeling of data variation.

1,256 citations
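
The CCR model referenced above can be stated compactly; below is a minimal sketch of its input-oriented envelopment LP solved with SciPy's linprog. The input/output data are hypothetical, and the formulation shown is the textbook form rather than any particular paper's variant.

```python
# Input-oriented CCR envelopment LP: min theta s.t. X @ lam <= theta * x_o,
# Y @ lam >= y_o, lam >= 0. Data below are hypothetical (inputs x DMUs, outputs x DMUs).
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 7.0, 8.0, 4.0, 2.0],    # input 1 for 5 DMUs
              [3.0, 3.0, 1.0, 2.0, 4.0]])   # input 2
Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])   # single output

def ccr_efficiency(o):
    """Efficiency theta of DMU o via the envelopment LP."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n); c[0] = 1.0            # variables [theta, lam_1..lam_n]
    A_in = np.hstack([-X[:, [o]], X])          # X lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # -Y lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.x[0]

print([round(ccr_efficiency(o), 3) for o in range(X.shape[1])])
```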


Journal ArticleDOI
TL;DR: A slacks-based network DEA model is proposed, called Network SBM, that can deal with intermediate products formally and evaluate divisional efficiencies along with the overall efficiency of decision making units (DMUs).
Abstract: Traditional DEA models deal with measurements of relative efficiency of DMUs regarding multiple-inputs vs. multiple-outputs. One of the drawbacks of these models is the neglect of intermediate products or linking activities. After pointing out the need to include them in DEA models, we propose a slacks-based network DEA model, called Network SBM, that can deal with intermediate products formally. Using this model we can evaluate divisional efficiencies along with the overall efficiency of decision making units (DMUs).

776 citations


Journal ArticleDOI
TL;DR: The basic Kriging assumptions and formulas are presented, contrasting Kriging with classic linear regression metamodels, and bootstrapping is discussed as a means of estimating the variance of the Kriging predictor.
Abstract: This article reviews Kriging (also called spatial correlation modeling). It presents the basic Kriging assumptions and formulas—contrasting Kriging and classic linear regression metamodels. Furthermore, it extends Kriging to random simulation, and discusses bootstrapping to estimate the variance of the Kriging predictor. Besides classic one-shot statistical designs such as Latin Hypercube Sampling, it reviews sequentialized and customized designs for sensitivity analysis and optimization. It ends with topics for future research.

739 citations
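
For readers unfamiliar with the formulas the article reviews, here is a minimal ordinary Kriging predictor with a Gaussian correlation function; the sample data and the correlation parameter theta are illustrative assumptions.

```python
# Ordinary Kriging: y_hat(x0) = mu + r' R^{-1} (y - mu * 1), with the GLS mean
# mu = (1' R^{-1} y) / (1' R^{-1} 1) and Gaussian correlations.
import numpy as np

def ordinary_kriging(X, y, x0, theta=1.0):
    """Predict y at x0 from observations (X, y)."""
    corr = lambda a, b: np.exp(-theta * np.sum((a - b) ** 2))
    n = len(X)
    R = np.array([[corr(X[i], X[j]) for j in range(n)] for i in range(n)])
    r = np.array([corr(X[i], x0) for i in range(n)])
    ones = np.ones(n)
    mu = ones @ np.linalg.solve(R, y) / (ones @ np.linalg.solve(R, ones))
    return mu + r @ np.linalg.solve(R, y - mu * ones)

X = np.array([[0.0], [0.5], [1.0], [1.5]])
y = np.sin(X).ravel()
print(ordinary_kriging(X, y, np.array([0.75])))  # interpolates between samples
```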


Journal ArticleDOI
TL;DR: This paper reviews the main contributions in the field of production and distribution planning for agri-foods based on agricultural crops, focusing particularly on those models that have been successfully implemented.
Abstract: The supply chain of agricultural products has received a great deal of attention lately due to issues related to public health. Something that has become apparent is that in the near future the design and operation of agricultural supply chains will be subject to more stringent regulations and closer monitoring, in particular those for products destined for human consumption (agri-foods). This implies that the traditional supply chain practices may be subject to revision and change. One of the aspects that may be the subject of considerable scrutiny is the planning activities performed along the supply chains of agricultural products. In this paper, we review the main contributions in the field of production and distribution planning for agri-foods based on agricultural crops. We focus particularly on those models that have been successfully implemented. The models are classified according to relevant features, such as the optimization approaches used, the type of crops modeled and the scope of the plans, among many others. Through our analysis of the current state of the research, we diagnose some of the future requirements for modeling the supply chain of agri-foods.

684 citations


Journal ArticleDOI
TL;DR: Second stage DEA efficiency analyses are examined within the context of a censoring data generating process (DGP) and a fractional data DGP, when efficiency scores are treated as descriptive measures of the relative performance of units in the sample; the analysis suggests that Tobit estimation in this situation is inappropriate.
Abstract: The paper examines second stage DEA efficiency analyses, within the context of a censoring data generating process (DGP) and a fractional data DGP, when efficiency scores are treated as descriptive measures of the relative performance of units in the sample. It is argued that the efficiency scores are not generated by a censoring process but are fractional data. Tobit estimation in this situation is inappropriate. In contrast, ordinary least squares is a consistent estimator, and, if White’s [White, H., 1980. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48, 817–838] heteroskedasticity-consistent standard errors are calculated, large sample tests can be performed which are robust to heteroskedasticity and the distribution of the disturbances. For a more refined analysis Papke and Wooldridge’s [Papke, L.E., Wooldridge, J.M., 1996. Econometric methods for fractional response variables with an application to 401(k) plan participation rates. Journal of Applied Econometrics 11 (6), 619–632] method has some advantages, but is more complex and requires special programming.

639 citations
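
The two estimation strategies contrasted in the abstract are easy to sketch with statsmodels: OLS with White's heteroskedasticity-consistent standard errors, and a Papke-Wooldridge-style fractional logit fitted by Bernoulli quasi-likelihood. The simulated efficiency scores and covariates below are placeholders, not real second-stage data.

```python
# Second-stage regression of DEA scores on environmental covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
z = rng.normal(size=(100, 2))                       # environmental covariates
scores = np.clip(0.6 + z @ [0.1, -0.05] + rng.normal(0, 0.1, 100), 0.05, 1.0)

Xmat = sm.add_constant(z)
ols = sm.OLS(scores, Xmat).fit(cov_type="HC0")      # White (1980) robust SEs
print(ols.params, ols.bse)

# Fractional logit: the Bernoulli quasi-likelihood is valid for y in [0, 1].
frac = sm.GLM(scores, Xmat, family=sm.families.Binomial()).fit(cov_type="HC0")
print(frac.params)
```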


Journal ArticleDOI
TL;DR: The current paper develops an additive efficiency decomposition approach wherein the overall efficiency is expressed as a (weighted) sum of the efficiencies of the individual stages; the approach can be applied under both constant returns to scale (CRS) and variable returns to scale (VRS) assumptions.
Abstract: Kao and Hwang (2008) [Kao, C., Hwang, S.-N., 2008. Efficiency decomposition in two-stage data envelopment analysis: An application to non-life insurance companies in Taiwan. European Journal of Operational Research 185 (1), 418–429] develop a data envelopment analysis (DEA) approach for measuring efficiency of decision processes which can be divided into two stages. The first stage uses inputs to generate outputs which become the inputs to the second stage. The first stage outputs are referred to as intermediate measures. The second stage then uses these intermediate measures to produce outputs. Kao and Hwang represent the efficiency of the overall process as the product of the efficiencies of the two stages. A major limitation of this model is its applicability to only constant returns to scale (CRS) situations. The current paper develops an additive efficiency decomposition approach wherein the overall efficiency is expressed as a (weighted) sum of the efficiencies of the individual stages. This approach can be applied under both CRS and variable returns to scale (VRS) assumptions. The case of Taiwanese non-life insurance companies is revisited using this newly developed approach.

484 citations
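
A toy numeric contrast between the two aggregation schemes discussed; the stage efficiencies and weights below are made up, and the full approach of course obtains them from an LP rather than fixing them.

```python
# Given stage efficiencies e1, e2 of a two-stage process, Kao and Hwang
# aggregate multiplicatively, while the additive approach uses weights with
# w1 + w2 = 1 (here chosen as a hypothetical resource share per stage).
e1, e2 = 0.8, 0.6          # stage efficiencies from some two-stage DEA run
w1, w2 = 0.55, 0.45        # assumed resource-based weights, w1 + w2 = 1

print("multiplicative overall:", e1 * e2)           # 0.48
print("additive overall:     ", w1 * e1 + w2 * e2)  # 0.71
```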


Journal ArticleDOI
TL;DR: A comprehensive explanation of the current state of the art in AS/RS design is provided for a range of issues such as system configuration, travel time estimation, storage assignment, dwell-point location, and request sequencing.
Abstract: Automated Storage and Retrieval Systems (AS/RSs) are warehousing systems that are used for the storage and retrieval of products in both distribution and production environments. This paper provides an overview of the literature from the past 30 years. A comprehensive explanation of the current state of the art in AS/RS design is provided for a range of issues such as system configuration, travel time estimation, storage assignment, dwell-point location, and request sequencing. The majority of the reviewed models and solution methods are applicable to static scheduling and design problems only. Requirements for AS/RSs are, however, increasingly of a more dynamic nature for which new models will need to be developed to overcome large computation times and finite planning horizons, and to improve system performance. Several other avenues for future research in the design and control of AS/RSs are also specified.

463 citations
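
One classic building block of the travel-time estimation literature the survey covers is the Bozer and White (1984) expected cycle-time model for a time-normalized rack; a sketch follows, with hypothetical crane parameters.

```python
# Expected single-command (SC) and dual-command (DC) crane cycle times for a
# rack normalized in time: T = max of horizontal/vertical traversal times,
# Q = min/max is the shape factor (Q = 1 for a square-in-time rack).
def asrs_cycle_times(t_h, t_v):
    T, Q = max(t_h, t_v), min(t_h, t_v) / max(t_h, t_v)
    e_sc = T * (1 + Q ** 2 / 3)                    # single command
    e_dc = T * (4 / 3 + Q ** 2 / 2 - Q ** 3 / 30)  # dual command
    return e_sc, e_dc

# e.g. 60 s to traverse the aisle, 25 s to reach the top tier (hypothetical)
print(asrs_cycle_times(60.0, 25.0))
```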


Journal ArticleDOI
TL;DR: This article gives an overview of the various models and methods used to predict future load demands and their applications in the electricity sector.
Abstract: For decision makers in the electricity sector, the decision process is complex, with several different levels that have to be taken into consideration. These comprise, for instance, the planning of facilities and the optimal day-to-day operation of the power plant. These decisions address widely different time-horizons and aspects of the system. For accomplishing these tasks, load forecasts are very important. Therefore, finding an appropriate approach and model is at the core of the decision process. Due to the deregulation of energy markets, load forecasting has gained even more importance. In this article, we give an overview of the various models and methods used to predict future load demands.

442 citations
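
As a flavor of the simplest model class such surveys cover, here is a linear autoregressive one-step load forecast fitted by least squares; the sinusoidal "load" series is synthetic.

```python
# Fit an AR(24) model to hourly load by least squares and forecast one step.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(500)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

p = 24                                            # last 24 hours as lags
rows = np.array([load[i - p:i] for i in range(p, load.size)])
targets = load[p:]
coef, *_ = np.linalg.lstsq(rows, targets, rcond=None)

forecast = load[-p:] @ coef                       # one-step-ahead prediction
print(round(forecast, 2))
```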


Journal ArticleDOI
TL;DR: This work surveys complexity results for the min–max and min–max regret versions of some combinatorial optimization problems: shortest path, spanning tree, assignment, min cut, min s–t cut, knapsack, and investigates the approximability of these problems.
Abstract: Min–max and min–max regret criteria are commonly used to define robust solutions. After motivating the use of these criteria, we present general results. Then, we survey complexity results for the min–max and min–max regret versions of some combinatorial optimization problems: shortest path, spanning tree, assignment, min cut, min s–t cut, knapsack. Since most of these problems are NP-hard, we also investigate their approximability. Furthermore, we present algorithms to solve these problems to optimality.

418 citations
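
The two criteria are easy to illustrate by brute force on a tiny discrete instance (hypothetical costs); note that the min–max and min–max regret choices can differ.

```python
# cost[s][x]: cost of solution x under scenario s (2 scenarios, 2 solutions).
cost = [[10, 11],
        [10,  2]]
best = [min(row) for row in cost]                 # scenario-wise optima
n = len(cost[0])

minmax = min(range(n), key=lambda x: max(row[x] for row in cost))
minmax_regret = min(range(n),
                    key=lambda x: max(row[x] - b for row, b in zip(cost, best)))
print(minmax, minmax_regret)   # 0 and 1: the two criteria pick different solutions
```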


Journal ArticleDOI
TL;DR: This paper reviews and discusses the three major planning approaches presented in the literature: mixed-model sequencing, car sequencing and level scheduling; it also provides a hierarchical classification scheme to systematically record the academic efforts in each field and to deduce future research issues.
Abstract: Manufacturers in a wide range of industries nowadays face the challenge of providing a rich product variety at a very low cost. This typically requires the implementation of cost efficient, flexible production systems. Often, so-called mixed-model assembly lines are employed, where setup operations are reduced to such an extent that various models of a common base product can be manufactured in intermixed sequences. However, the observed diversity of mixed-model lines makes a thorough sequence planning essential for exploiting the benefits of assembly line production. This paper reviews and discusses the three major planning approaches presented in the literature: mixed-model sequencing, car sequencing and level scheduling; it also provides a hierarchical classification scheme to systematically record the academic efforts in each field and to deduce future research issues.
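
As a concrete taste of one of the three approaches, the sketch below checks the "H out of N" sliding-window rule at the heart of car sequencing; the sequence and rule are hypothetical.

```python
# An option with an "H out of N" rule may appear in at most H cars of any N
# consecutive positions in the production sequence.
def violates(sequence, option, H, N):
    """True if `option` occurs more than H times in some window of N cars."""
    hits = [1 if option in car else 0 for car in sequence]
    return any(sum(hits[i:i + N]) > H for i in range(len(hits) - N + 1))

seq = [{"sunroof"}, {"sunroof"}, set(), {"sunroof"}, set()]
print(violates(seq, "sunroof", H=1, N=2))   # True: positions 1-2 both need it
```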

Journal ArticleDOI
TL;DR: An improved ant colony optimization (IACO) algorithm for the VRP is proposed, featuring a new pheromone updating strategy, called the ant-weight strategy, and a mutation operation.
Abstract: The vehicle routing problem (VRP), a well-known combinatorial optimization problem, holds a central place in logistics management. This paper proposes an improved ant colony optimization (IACO) algorithm to solve the VRP, featuring a new pheromone updating strategy, called the ant-weight strategy, and a mutation operation. The computational results for fourteen benchmark problems are reported and compared to those of other metaheuristic approaches.
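
For context, here is a generic Ant System pheromone update (evaporation plus a deposit inversely proportional to tour length). This is deliberately not the paper's ant-weight rule, which further weights the per-arc deposit using route-level information; rho, Q and the toy tours are illustrative assumptions.

```python
# Generic Ant System update: evaporate all arcs, then deposit along each tour.
import numpy as np

n, rho, Q = 5, 0.1, 1.0
tau = np.ones((n, n))                         # pheromone matrix

def update(tau, tours_with_lengths):
    tau *= (1 - rho)                          # evaporation
    for tour, length in tours_with_lengths:
        for i, j in zip(tour, tour[1:] + tour[:1]):   # close the tour
            tau[i, j] += Q / length           # deposit inversely prop. to length
    return tau

tau = update(tau, [([0, 2, 4, 1, 3], 17.0), ([0, 1, 2, 3, 4], 21.5)])
print(tau.round(3))
```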

Journal ArticleDOI
TL;DR: This paper builds a relational network DEA model, taking into account the interrelationship of the processes within the system, to measure the efficiency of the system and those of the processes at the same time; the system efficiency is decomposed into the product of the efficiencies of the stages in series, and the inefficiency slack of each stage into the sum of the inefficiency slacks of its component processes connected in parallel.
Abstract: Traditional studies in data envelopment analysis (DEA) view systems as a whole when measuring the efficiency, ignoring the operation of individual processes within a system. This paper builds a relational network DEA model, taking into account the interrelationship of the processes within the system, to measure the efficiency of the system and those of the processes at the same time. The system efficiency thus measured more properly represents the aggregate performance of the component processes. By introducing dummy processes, the original network system can be transformed into a series system where each stage in the series is of a parallel structure. Based on these series and parallel structures, the efficiency of the system is decomposed into the product of the efficiencies of the stages in the series and the inefficiency slack of each stage into the sum of the inefficiency slacks of its component processes connected in parallel. With efficiency decomposition, the process which causes the inefficient operation of the system can be identified for future improvement. An example of the non-life insurance industry in Taiwan illustrates the whole idea.

Journal ArticleDOI
TL;DR: The paper concludes that assessing scenarios with PMCA is resource-intensive, but the methodology successfully captures the context of technology deployment and allows decision-making based on a robust and democratic process, which addresses uncertainties, acknowledges multiple legitimate perspectives and encourages social learning.
Abstract: This paper analyses the combined use of scenario building and participatory multi-criteria analysis (PMCA) in the context of renewable energy from a methodological point of view. Scenarios have been applied increasingly in decision-making about long-term consequences by projecting different possible pathways into the future. Scenario analysis accounts for a higher degree of complexity inherent in systems than the study of individual projects or technologies. MCA is a widely used appraisal method, which assesses options on the basis of a multi-dimensional criteria framework and calculates rankings of options. In our study, five renewable energy scenarios for Austria for 2020 were appraised against 17 sustainability criteria. A similar process was undertaken on the local level, where four renewable energy scenarios were developed and evaluated against 15 criteria. On both levels, the scenario development consisted of two stages: first an exploratory stage with stakeholder engagement and second a modelling stage with forecasting-type scenarios. Thus, the scenarios consist of a narrative part (storyline) and a modeled quantitative part. The preferences of national and local energy stakeholders were included in the form of criteria weights derived from interviews and participatory group processes, respectively. Drawing on the case of renewable energy promotion in Austria, the paper systematically analyses the potentials and limitations of the methodology (1) for capturing the complexity of decision-making about the long-term consequences of changes in socio-economic and biophysical systems and (2) for appraising energy futures. The paper concludes that assessing scenarios with PMCA is resource-intensive, but this methodology successfully captures the context of technology deployment and allows decision-making based on a robust and democratic process, which addresses uncertainties, acknowledges multiple legitimate perspectives and encourages social learning.
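
The aggregation step inside such a participatory multi-criteria analysis reduces, in its simplest weighted-sum form, to a few lines; the scenario scores and stakeholder weights below are hypothetical, not the study's data.

```python
# Normalize each criterion to [0, 1], apply stakeholder weights, rank scenarios.
import numpy as np

scores = np.array([[3.0, 40.0, 0.7],     # scenario A on 3 criteria
                   [5.0, 25.0, 0.9],     # scenario B
                   [4.0, 30.0, 0.4]])    # scenario C
weights = np.array([0.5, 0.3, 0.2])      # elicited weights, sum to 1 (assumed)

lo, hi = scores.min(axis=0), scores.max(axis=0)
norm = (scores - lo) / (hi - lo)         # assumes all criteria are "benefits"
overall = norm @ weights
print(sorted(zip(overall.round(3), "ABC"), reverse=True))
```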

Journal ArticleDOI
TL;DR: Experimental results show that by using current hospital resources, the optimization simulation model generates an optimal staffing allocation that would allow a 28% increase in patient throughput and an average 40% reduction in patients' waiting time.
Abstract: This paper integrates simulation with optimization to design a decision support tool for the operation of an emergency department unit at a governmental hospital in Kuwait. The hospital provides a set of services for different categories of patients. We present a methodology that uses system simulation combined with optimization to determine the optimal number of doctors, lab technicians and nurses required to maximize patient throughput and to reduce patient time in the system subject to budget restrictions. The major objective of this decision support tool is to evaluate the impact of various staffing levels on service efficiency. Experimental results show that by using current hospital resources, the optimization simulation model generates an optimal staffing allocation that would allow a 28% increase in patient throughput and an average 40% reduction in patients’ waiting time.
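
An analytic stand-in for the kind of staffing question the simulation answers: the M/M/c (Erlang C) mean-wait formula evaluated over candidate doctor counts. Arrival and service rates below are hypothetical; the paper's simulation model is far more detailed.

```python
# Mean wait in queue for an M/M/c system (requires lam < c * mu).
from math import factorial

def erlang_c_wait(lam, mu, c):
    a, rho = lam / mu, lam / (c * mu)
    p_wait = (a ** c / factorial(c)) / (
        (a ** c / factorial(c))
        + (1 - rho) * sum(a ** k / factorial(k) for k in range(c)))
    return p_wait / (c * mu - lam)

lam, mu = 10.0, 3.0                     # patients/hour, service rate per doctor
for c in range(4, 9):
    print(c, "doctors -> mean wait", round(60 * erlang_c_wait(lam, mu, c), 1), "min")
```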

Journal ArticleDOI
TL;DR: This article proposes six DEA-based performance evaluation models, applied to a research sample of Chinese coal-fired power plants; the findings not only contribute to performance measurement methodology, but also have policy implications for the Chinese power sector.
Abstract: There are two difficulties in doing an objective evaluation of the performance of decision making units (DMUs). The first one is how to treat undesirable outputs jointly produced with the desirable outputs, and the second one is how to treat uncontrollable variables, which often capture the impact of the operating environment. Given difficulties in both model construction and data availability, very few published papers simultaneously consider the above two problems. This article attempts to do so by proposing six DEA-based performance evaluation models, which are applied to a research sample of Chinese coal-fired power plants. The findings of this paper not only contribute to performance measurement methodology, but also have policy implications for the Chinese coal-fired power sector.

Journal ArticleDOI
TL;DR: It is shown in this paper that the Pareto front of bi-objective problems can be efficiently generated with the ϵ-constraint method; heuristics based on information gathered from previous subproblems that significantly speed up the method are also described.
Abstract: This paper describes an exact ϵ-constraint method for bi-objective combinatorial optimization problems with integer objective values. This method tackles multi-objective optimization problems by solving a series of single objective subproblems, where all but one objectives are transformed into constraints. We show in this paper that the Pareto front of bi-objective problems can be efficiently generated with the ϵ-constraint method. Furthermore, we describe heuristics based on information gathered from previous subproblems that significantly speed up the method. This approach is used to find the exact Pareto front of the Traveling Salesman Problem with Profits, a variant of the Traveling Salesman Problem in which a profit or prize value is associated with each vertex. The goal here is to visit a subset of vertices while addressing two conflicting objectives: maximize the collected prize and minimize the travel costs. We report the first exact results for this problem on instances derived from classical Vehicle Routing and Traveling Salesman Problem instances with up to 150 vertices. Results on approximations of the Pareto front obtained from a variant of our exact algorithm are also reported.
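
On a small integer bi-objective instance the ϵ-constraint loop can be run exactly by brute force, which makes the mechanics transparent; the toy bi-objective knapsack data below are hypothetical, and a MIP solver would replace the enumeration in practice.

```python
# Epsilon-constraint: repeatedly minimize f1 subject to f2 <= eps, then tighten
# eps past the solution found; with integer objectives this enumerates the
# exact Pareto front.
from itertools import product

profit = [4, 3, 5, 6]
cost = [3, 2, 4, 5]                    # objectives: max profit, min cost
sols = list(product([0, 1], repeat=4))
f1 = lambda x: -sum(p * xi for p, xi in zip(profit, x))   # minimize -profit
f2 = lambda x: sum(c * xi for c, xi in zip(cost, x))

pareto, eps = [], max(map(f2, sols))
while True:
    feas = [x for x in sols if f2(x) <= eps]
    if not feas:
        break
    best = min(feas, key=lambda x: (f1(x), f2(x)))        # lexicographic tie-break
    pareto.append((-f1(best), f2(best)))
    eps = f2(best) - 1                                    # integer objectives
print(pareto)    # exact Pareto front of (profit, cost) points
```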

Journal ArticleDOI
TL;DR: The current literature on the overall methodology of warehouse design is explored, together with the literature on tools and techniques used for specific areas of analysis, to assist further research into the development of a more comprehensive methodology for warehouse design.
Abstract: In spite of the importance of warehousing to the customer service and cost levels of many businesses, there is currently no comprehensive systematic method for designing warehouses. In this paper, the current literature on the overall methodology of warehouse design is explored, together with the literature on tools and techniques used for specific areas of analysis. The general results from the literature have then been validated and refined with reference to warehouse design companies. The output is a general framework of steps, with specific tools and techniques that can be used for each step. This is intended to be of value to practitioners and to assist further research into the development of a more comprehensive methodology for warehouse design.

Journal ArticleDOI
TL;DR: It is shown that a slightly modified version of the proposed VNS procedure is also competitive for the Periodic Traveling Salesman Problem (PTSP) and even outperforms existing solution procedures proposed in the literature.
Abstract: The aim of this paper is to propose a new heuristic for the Periodic Vehicle Routing Problem (PVRP) without time windows. The PVRP extends the classical Vehicle Routing Problem (VRP) to a planning horizon of several days. Each customer requires a certain number of visits within this time horizon while there is some flexibility on the exact days of the visits. Hence, one has to choose the visit days for each customer and to solve a VRP for each day. Our method is based on Variable Neighborhood Search (VNS). Computational results are presented that show that our approach is competitive and even outperforms existing solution procedures proposed in the literature. Also considered is the special case of a single vehicle, i.e. the Periodic Traveling Salesman Problem (PTSP). It is shown that slight changes to the proposed VNS procedure make it competitive for the PTSP as well.
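
The shake / local-search / move-or-not loop that defines VNS is compact; below is a minimal skeleton on a toy permutation problem, with an illustrative objective and swap neighborhoods rather than the paper's PVRP operators.

```python
import random

random.seed(0)
obj = lambda s: sum(abs(v - i) for i, v in enumerate(s))   # 0 when sorted

def shake(s, k):                        # k random swaps: larger k, bigger jump
    s = s[:]
    for _ in range(k):
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
    return s

def local_search(s):                    # best-improvement over pairwise swaps
    best, improved = s[:], True
    while improved:
        improved = False
        for i in range(len(best)):
            for j in range(i + 1, len(best)):
                cand = best[:]
                cand[i], cand[j] = cand[j], cand[i]
                if obj(cand) < obj(best):
                    best, improved = cand, True
    return best

s, k, kmax = random.sample(range(8), 8), 1, 4
while k <= kmax:
    cand = local_search(shake(s, k))
    if obj(cand) < obj(s):
        s, k = cand, 1                  # improvement: move and reset k
    else:
        k += 1                          # no improvement: larger neighborhood
print(s, obj(s))
```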

Journal ArticleDOI
TL;DR: This paper addresses channel coordination by seeking optimal cooperative advertising strategies and equilibrium pricing in a two-member distribution channel and identifies the feasible solutions to a bargaining problem where the channel members can determine how to divide the extra profits.
Abstract: Cooperative advertising is a practice in which a manufacturer pays retailers a portion of the local advertising cost in order to induce sales. Cooperative advertising plays a significant role in marketing programs of channel members. Nevertheless, most studies to date on cooperative advertising have assumed that the market demand is influenced only by advertising expenditures and not by retail price. This paper addresses channel coordination by seeking optimal cooperative advertising strategies and equilibrium pricing in a two-member distribution channel. We establish and compare two models: a non-cooperative, leader–follower game and a cooperative game. We develop propositions and insights from the comparison of these models. The cooperative model achieves better coordination by generating higher channel-wide profits than the non-cooperative model with these features: (a) the retail price to consumers is lower; and (b) the advertising efforts are higher for all channel members. We identify the feasible solutions to a bargaining problem where the channel members can determine how to divide the extra profits.

Journal ArticleDOI
TL;DR: This paper proposes a three-step approach for evacuation planning and explains that the last step, which corresponds to the distribution of evacuees into the safe areas, is a spatial multiobjective optimization problem (MOP), because the objective functions and data required for solving the problem have a spatial component.
Abstract: In an emergency situation, evacuation is conducted in order to displace people from a dangerous place to a safer place, and it usually needs to be done in a hurry. It is necessary to prepare evacuation plans in order to have a good response in an emergency situation. A central challenge in developing an evacuation plan is in determining the distribution of evacuees into the safe areas, that is, deciding where and from which road each evacuee should go. To achieve this aim, several objective functions should be brought into consideration and need to be satisfied simultaneously, though these objective functions may often conflict with each other. This paper aims to address the use of multiobjective evolutionary algorithms (MOEA) and the geographical information system (GIS) for evacuation planning. The paper proposes a three-step approach for evacuation planning. It explains that the last step, which corresponds to distribution of evacuees into the safe areas, is a spatial multiobjective optimization problem (MOP), because the objective functions and data required for solving the problem have a spatial component. To solve the MOP, two objective functions are defined, different algorithms for solving the problem are investigated, and the proper algorithm is selected. Finally, in the context of a case study project and based on the proposed approach and algorithm, evacuation planning is conducted in a GIS environment, and the results are tested. This paper is based on an ongoing research project in Iran.
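
The core test inside any multiobjective evolutionary algorithm is nondomination; a minimal filter follows, applied to hypothetical evacuation-plan objective vectors (both objectives minimized).

```python
# Keep only plans whose objective vectors are not dominated by another plan.
def nondominated(points):
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# (total travel distance, max congestion) per candidate plan, both minimized
plans = [(12.0, 5.0), (10.0, 7.0), (13.0, 6.0), (9.0, 9.0)]
print(nondominated(plans))   # (13.0, 6.0) is dominated by (12.0, 5.0)
```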

Journal ArticleDOI
TL;DR: The optimal pricing strategies when a product is sold on two channels, such as the Internet and a traditional channel, are studied, and the behavior of prices and profits under different parameters and consumer preferences for the alternative channels is explored.
Abstract: In this paper we study the optimal pricing strategies when a product is sold on two channels such as the Internet and a traditional channel. We assume a stylized deterministic demand model where the demand on a channel depends on prices, degree of substitution across channels and the overall market potential. We first study four prevalent pricing strategies which differ in the degree of autonomy for the Internet channel. For a monopoly, we provide theoretical bounds for these pricing strategies. We also analyze the duopoly case where an incumbent mixed retailer faces competition with a pure retailer and characterize price equilibria. Finally, through a computational study, we explore the behavior (price and profits) under different parameters and consumer preferences for the alternative channels.
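
A minimal version of the centralized (monopoly) pricing calculation can be solved directly from first-order conditions under a stylized linear demand with channel substitution; the demand form and parameters below are assumptions in the spirit of, not identical to, the paper's model.

```python
# Demand d_i = a_i - b*p_i + g*p_j; profit (p1-c)d1 + (p2-c)d2. Setting the
# derivatives w.r.t. p1, p2 to zero gives a 2x2 linear system.
import numpy as np

a1, a2 = 100.0, 80.0      # market potentials (traditional, Internet)
b, g, c = 2.0, 0.5, 10.0  # own-price sensitivity, substitution, unit cost

A = np.array([[2 * b, -2 * g],
              [-2 * g, 2 * b]])
rhs = np.array([a1 + c * (b - g), a2 + c * (b - g)])
p1, p2 = np.linalg.solve(A, rhs)
print(round(p1, 2), round(p2, 2))   # optimal channel prices: 37.0 and 33.0
```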

Journal ArticleDOI
TL;DR: Results show that Monte Carlo cost-to-go estimation reduces computation time by 65% in large instances with little or no loss in solution quality; results are also compared to the perfect information case obtained by solving exact a posteriori solutions for sampled vehicle routing problems.
Abstract: This paper examines approximate dynamic programming algorithms for the single-vehicle routing problem with stochastic demands from a dynamic or reoptimization perspective. The methods extend the rollout algorithm by implementing different base sequences (i.e. a priori solutions), look-ahead policies, and pruning schemes. The paper also considers computing the cost-to-go with Monte Carlo simulation in addition to direct approaches. The best new method found is a two-step lookahead rollout started with a stochastic base sequence. The routing cost is about 4.8% less than that of the one-step rollout algorithm started with a deterministic sequence. Results also show that Monte Carlo cost-to-go estimation reduces computation time by 65% in large instances with little or no loss in solution quality. Moreover, the paper compares results to the perfect information case from solving exact a posteriori solutions for sampled vehicle routing problems. The confidence interval for the overall mean difference is (3.56%, 4.11%).
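
The Monte Carlo cost-to-go idea reduces to simulating a fixed a priori route under sampled demands and averaging; the sketch below does this on a toy line-shaped instance with a classic restocking recourse. Geography, capacity and the demand distribution are toy assumptions.

```python
# Estimate the expected cost of an a priori route by Monte Carlo simulation.
import random

random.seed(0)
loc = {0: 0.0, 1: 2.0, 2: 5.0, 3: 7.0}      # depot 0 and three customers
route, capacity = [1, 2, 3], 10

def simulate_cost(route):
    pos, load, cost = 0, capacity, 0.0
    for c in route:
        demand = random.randint(1, 6)        # revealed only upon arrival
        cost += abs(loc[c] - loc[pos]); pos = c
        if demand > load:                    # route failure: restock at depot
            cost += 2 * abs(loc[pos] - loc[0])
            load = capacity
        load -= demand
    return cost + abs(loc[pos] - loc[0])     # return to depot

est = sum(simulate_cost(route) for _ in range(5000)) / 5000
print(round(est, 2))   # plug-in estimate of the base sequence's expected cost
```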

Journal ArticleDOI
TL;DR: An integer programming (IP) formulation for optimal route assignment is presented, which utilizes M/G/c/c state-dependent queueing models to cope with congestion and time delays on road links.
Abstract: In this paper, the optimal design and analysis of evacuation routes in transportation networks are examined. A methodology for optimal egress route assignment is suggested. An integer programming (IP) formulation for optimal route assignment is presented, which utilizes M/G/c/c state-dependent queueing models to cope with congestion and time delays on road links. M/G/c/c simulation software is used to evaluate performance measures of the evacuation plan: clearance time, total travelled distance and blocking probabilities. Extensive experimental results are included.
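
In its constant-speed special case, the M/G/c/c loss model's blocking probability is given by the Erlang B formula (the paper's state-dependent variant additionally lets walking speed fall with occupancy); a sketch with illustrative numbers follows.

```python
# Blocking probability for an M/G/c/c loss system with offered load a = lam/mu,
# computed with the numerically stable Erlang B recursion.
def erlang_b(a, c):
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

# a road link sized for c = 120 pedestrians, offered load 100 (hypothetical)
print(round(erlang_b(100.0, 120), 4))
```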

Journal ArticleDOI
TL;DR: Several seller-buyer supply chain models are proposed which incorporate both cost factors and elements of competition and cooperation between seller and buyer.
Abstract: In this paper, several seller-buyer supply chain models are proposed which incorporate both cost factors and elements of competition and cooperation between seller and buyer. We assume that unit marketing expenditure and unit price charged by the buyer influence the demand of the product being sold. The relationship between seller and buyer is modeled by both non-cooperative and cooperative games. The non-cooperative game is based on the Stackelberg strategy solution concept, where we consider separately the case when the seller is the leader (Seller-Stackelberg) and also when the buyer is the leader (Buyer-Stackelberg). Pareto efficient solutions are provided for the cooperative game model. Numerical examples presented in this paper, including sensitivity analysis of some key parameters, compare the results between the different models considered.
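
The Seller-Stackelberg logic is easy to see numerically: the leader grid-searches its wholesale price while anticipating the follower's best-response retail price. The linear demand and cost figures below are illustrative assumptions, not the paper's exact functional forms (which also include marketing expenditure).

```python
# Leader-follower (Stackelberg) pricing under hypothetical demand d = a - b*p.
import numpy as np

a, b, cost = 100.0, 2.0, 10.0
wholesale = np.linspace(cost, 45, 351)

def buyer_best_price(w):               # buyer maximizes (p - w)(a - b*p)
    return (a / b + w) / 2

profits = [(w - cost) * (a - b * buyer_best_price(w)) for w in wholesale]
w_star = wholesale[int(np.argmax(profits))]
print(round(w_star, 2), round(buyer_best_price(w_star), 2))   # 30.0, 40.0
```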

Journal ArticleDOI
TL;DR: An inventory model with general ramp type demand rate, time dependent (Weibull) deterioration rate and partial backlogging of unsatisfied demand is considered and the optimal replenishment policy for the model is derived.
Abstract: In this paper, an inventory model with general ramp type demand rate, time dependent (Weibull) deterioration rate and partial backlogging of unsatisfied demand is considered. The model is studied under the following different replenishment policies: (a) starting with no shortages and (b) starting with shortages. The model is fairly general as the demand rate, up to the time point of its stabilization, is a general function of time. The backlogging rate is any non-increasing function of the waiting time up to the next replenishment. The optimal replenishment policy for the model is derived for both of the above-mentioned policies.
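
The inventory dynamics described above can be illustrated numerically: Euler-integrate dI/dt = -D(t) - theta(t)I(t) with a ramp demand and a Weibull deterioration rate to find the depletion time of an initial stock. All parameters are hypothetical; the paper derives optimal policies analytically.

```python
# Ramp demand D(t) = D0 * min(t, mu); Weibull deterioration rate
# theta(t) = alpha * beta * t**(beta - 1).
D0, mu = 5.0, 2.0                      # demand ramps as D0*t, stabilizes at mu
alpha, beta = 0.05, 1.5                # Weibull deterioration parameters
I, t, dt = 40.0, 0.0, 1e-4             # initial stock, clock, step size

while I > 0:
    demand = D0 * min(t, mu)
    theta = alpha * beta * t ** (beta - 1) if t > 0 else 0.0
    I -= (demand + theta * I) * dt     # dI/dt = -D(t) - theta(t) * I(t)
    t += dt
print(round(t, 3))                     # depletion time of the initial stock
```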

Journal ArticleDOI
TL;DR: A supply chain design problem modeled as a sequence of splitting and combining processes is formulated as a two-stage stochastic program, where the first-stage decisions are strategic location decisions and the second stage consists of operational decisions.
Abstract: We present a supply chain design problem modeled as a sequence of splitting and combining processes. We formulate the problem as a two-stage stochastic program. The first-stage decisions are strategic location decisions, whereas the second stage consists of operational decisions. The objective is to minimize the sum of investment costs and expected costs of operating the supply chain. In particular the model emphasizes the importance of operational flexibility when making strategic decisions. For that reason short-term uncertainty is considered as well as long-term uncertainty. The real-world case used to illustrate the model is from the Norwegian meat industry. We solve the problem by sample average approximation in combination with dual decomposition. Computational results are presented for different sample sizes and different levels of data aggregation in the second stage.
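
The sample average approximation step is simple to sketch when the first stage is small enough to enumerate: estimate each design's expected second-stage cost over sampled demands and keep the cheapest. The two-site instance, costs and demand distribution below are hypothetical stand-ins for the paper's model.

```python
# Tiny SAA: enumerate first-stage location decisions, average second-stage
# dispatch costs over sampled demand scenarios.
import itertools, random

random.seed(0)
open_cost = {"A": 50.0, "B": 70.0}
unit_cost = {"A": 4.0, "B": 2.5}          # per unit delivered
capacity = {"A": 30.0, "B": 40.0}
scenarios = [max(0.0, random.gauss(45, 10)) for _ in range(200)]

def second_stage(opened, demand):         # cheapest feasible dispatch + penalty
    supply, cost = 0.0, 0.0
    for site in sorted(opened, key=unit_cost.get):
        q = min(capacity[site], demand - supply)
        supply += q; cost += unit_cost[site] * q
    return cost + 20.0 * max(0.0, demand - supply)   # lost-sales penalty

designs = [d for r in range(3) for d in itertools.combinations("AB", r)]
best = min(designs, key=lambda d: sum(open_cost[s] for s in d)
           + sum(second_stage(d, dm) for dm in scenarios) / len(scenarios))
print(best)
```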

Journal ArticleDOI
TL;DR: An evaluation study of residential properties carried out together with real estate agents in the city of Volta Redonda, Brazil aimed to define a reference value for the rents of these properties using the TODIM method of Multicriteria Decision Aiding.
Abstract: This article presents an evaluation study of residential properties carried out together with real estate agents in the city of Volta Redonda, Brazil. The study aimed to define a reference value for the rents of these properties using the TODIM method of Multicriteria Decision Aiding. By applying this method to the ordering of properties with different characteristics, a ranking of all the properties was obtained and, as a result of this, diverse ranges of rental values for the properties under analysis. The study was complemented by an analysis of the sensitivity of the numerical results obtained.
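
Under one common presentation of TODIM (Gomes and Lima), the ranking is built from prospect-theory-style dominance comparisons per criterion; a compact sketch follows, with hypothetical property data, weights and attenuation factor theta.

```python
# TODIM sketch: per-criterion gain/loss dominance, summed and normalized.
import numpy as np

X = np.array([[650.0, 3, 80.0],      # rent-relevant scores of 3 properties
              [800.0, 4, 95.0],
              [700.0, 2, 120.0]])
w = np.array([0.5, 0.3, 0.2])        # criteria weights (all benefit criteria)
theta = 1.0                          # loss-attenuation factor (assumed)

norm = X / X.sum(axis=0)             # normalize each criterion column
wr = w / w.max()                     # weights relative to the reference criterion

def dominance(i, j):
    total = 0.0
    for c in range(len(w)):
        d = norm[i, c] - norm[j, c]
        if d > 0:                                        # gain on criterion c
            total += np.sqrt(wr[c] * d / wr.sum())
        elif d < 0:                                      # attenuated loss
            total -= np.sqrt(wr.sum() * (-d) / wr[c]) / theta
    return total

n = len(X)
delta = np.array([[dominance(i, j) for j in range(n)] for i in range(n)]).sum(axis=1)
xi = (delta - delta.min()) / (delta.max() - delta.min())
print(xi.round(3))                   # global ranking values in [0, 1]
```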

Journal ArticleDOI
TL;DR: A parallel DEA model is developed which takes the operation of individual components into account in calculating the efficiency of the system; the efficiency calculated is smaller than that calculated from the conventional DEA model, yielding stronger discrimination power.
Abstract: In the real world there are systems which are composed of independent production units. The conventional data envelopment analysis (DEA) model uses the sum of the respective inputs and outputs of all component units of a system to calculate its efficiency. This paper develops a parallel DEA model which takes the operation of individual components into account in calculating the efficiency of the system. A property owned by this parallel model is that the inefficiency slack of the system can be decomposed into the inefficiency slacks of its component units. This helps the decision maker identify inefficient components and make subsequent improvements. Another property is that the efficiency calculated from this model is smaller than that calculated from the conventional DEA model. Few systems will have a perfect efficiency score; consequently, stronger discrimination power is gained. In addition to theoretical derivations, a case of the national forests of Taiwan is used as an example to illustrate the whole idea.

Journal ArticleDOI
TL;DR: The main result is the configuration of three collection networks within Mexico, corresponding to three possible scenarios with collection coverage of 100%, 90% and 75%, respectively.
Abstract: This paper seeks to describe several features of establishing a closed-loop supply chain for the collection of End-of-Life Vehicles (ELVs) in Mexico. To address this task, the problem is handled through Reverse Logistics and is modelled as an Uncapacitated Facility Location Problem. The solution of this model is obtained using the SITATION© software. Furthermore, this work also presents a brief description of the current Mexican ELV management system and the future trends in ELV generation in Mexico. The main result is the configuration of three collection networks within Mexico, corresponding to three possible scenarios with collection coverage of 100%, 90% and 75%, respectively. Regions with high ELV generation are identified, as well as relevant factors affecting total costs in the reverse supply chain.
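
The Uncapacitated Facility Location Problem at the core of the study can be sketched as a small binary program; below it is modeled with the PuLP library (an assumption; the authors used the SITATION software), with hypothetical sites, regions and costs.

```python
# UFLP: open sites y[s] and assignments x[s, r] minimizing fixed + assignment
# costs, with every region served by exactly one open site.
import pulp

sites, regions = ["S1", "S2", "S3"], ["R1", "R2", "R3", "R4"]
fixed = {"S1": 120, "S2": 100, "S3": 140}
assign = {("S1","R1"): 4, ("S1","R2"): 9, ("S1","R3"): 7, ("S1","R4"): 6,
          ("S2","R1"): 8, ("S2","R2"): 3, ("S2","R3"): 5, ("S2","R4"): 9,
          ("S3","R1"): 6, ("S3","R2"): 7, ("S3","R3"): 2, ("S3","R4"): 4}

m = pulp.LpProblem("ufl", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", sites, cat="Binary")
x = pulp.LpVariable.dicts("serve", list(assign.keys()), cat="Binary")
m += (pulp.lpSum(fixed[s] * y[s] for s in sites)
      + pulp.lpSum(assign[k] * x[k] for k in assign))
for r in regions:
    m += pulp.lpSum(x[s, r] for s in sites) == 1          # each region covered
for s, r in assign:
    m += x[s, r] <= y[s]                                  # only open sites serve
m.solve(pulp.PULP_CBC_CMD(msg=False))
print([s for s in sites if y[s].value() == 1])
```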