
Showing papers in "Omega - International Journal of Management Science" (2016)


Journal ArticleDOI
TL;DR: In this paper, the authors propose a non-linear minmax model to identify the weights such that the maximum absolute difference between the weight ratios and their corresponding comparisons is minimized, which may result in multiple optimal solutions.
Abstract: The Best Worst Method (BWM) is a multi-criteria decision-making method that uses two vectors of pairwise comparisons to determine the weights of criteria. First, the best (e.g. most desirable, most important) and the worst (e.g. least desirable, least important) criteria are identified by the decision-maker, after which the best criterion is compared to the other criteria, and the other criteria to the worst criterion. A non-linear minmax model is then used to identify the weights such that the maximum absolute difference between the weight ratios and their corresponding comparisons is minimized. The minmax model may result in multiple optimal solutions. Although in some cases decision-makers prefer to have multiple optimal solutions, in other cases they prefer a unique solution. The aim of this paper is twofold: first, we propose using interval analysis for the case of multiple optimal solutions, in which we show how the criteria can be weighed and ranked. Second, we propose a linear model for BWM, which is based on the same philosophy but yields a unique solution.
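
For reference, the two models described in the abstract can be written compactly. The following is a sketch in standard BWM notation (the symbols a_Bj and a_jW, the best-to-others and others-to-worst comparisons, are assumed names for the two comparison vectors):

    \min_{w}\; \max_{j} \left\{ \left|\frac{w_B}{w_j} - a_{Bj}\right|,\; \left|\frac{w_j}{w_W} - a_{jW}\right| \right\}
    \quad \text{s.t.} \quad \sum_{j} w_j = 1, \qquad w_j \ge 0 \;\; \forall j,

and the proposed linear counterpart, whose optimal weight vector is unique:

    \min_{w,\,\xi}\; \xi
    \quad \text{s.t.} \quad |w_B - a_{Bj} w_j| \le \xi, \qquad |w_j - a_{jW} w_W| \le \xi \;\; \forall j, \qquad \sum_{j} w_j = 1, \qquad w_j \ge 0.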

1,005 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an empirical application and comparison of six different multiple criteria decision-making (MCDM) approaches for the purpose of assessing sustainable housing affordability, and evaluate the applicability of different MCDM methods for the focused decision problem.
Abstract: While affordability is traditionally assessed in economic terms, this paper tests a new assessment method that draws closer links with sustainability by considering economic, social and environmental criteria that impact on a household’s quality of life. The paper presents an empirical application and comparison of six different multiple criteria decision making (MCDM) approaches for the purpose of assessing sustainable housing affordability. The comparative performance of the weighted product model (WPM), the weighted sum model (WSM), the revised AHP, TOPSIS and COPRAS is investigated. The purpose of the comparative analysis is to determine how different MCDM methods compare when used for a sustainable housing affordability assessment model. Twenty evaluative criteria and ten alternative areas in Liverpool, England, were considered. The applicability of the different MCDM methods to the focused decision problem was investigated. The paper discusses the similarities in MCDM methods, evaluates their robustness and contrasts the resulting rankings.
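
As a minimal illustration of the two simplest methods in the comparison, the sketch below scores a small decision problem with the weighted sum and weighted product models. The matrix, weights and normalization are illustrative assumptions, not data from the paper:

    import numpy as np

    # hypothetical decision matrix: rows = alternatives, columns = benefit criteria
    X = np.array([[3.0, 5.0, 7.0],
                  [4.0, 4.0, 6.0],
                  [5.0, 3.0, 8.0]])
    w = np.array([0.5, 0.3, 0.2])           # criteria weights, summing to 1

    Xn = X / X.max(axis=0)                  # simple linear normalization
    wsm = Xn @ w                            # weighted sum model: higher is better
    wpm = np.prod(Xn ** w, axis=1)          # weighted product model: higher is better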

350 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used both a systematic literature search and co-citation analysis to investigate the specific research domains of organizational resilience and its strategic and operational management to understand the current state of development and future research directions.
Abstract: This article uses both a systematic literature search and co-citation analysis to investigate the specific research domains of organizational resilience and its strategic and operational management, to understand the current state of development and future research directions. The research stream on the organizational and operational management of resilience has moved beyond its infancy, but it can still be considered to be in a developing phase. We found evidence that the academic literature has reached a shared consensus on the definition, foundations, and characteristics of resilience and that, in recent years, the main subfield of research has been supply chain resilience. Nevertheless, the literature is still far from reaching consensus on the implementation of resilience, i.e., how to reach operational resilience and how to create and maintain resilient processes. Finally, based on the results of in-depth co-citation and literature analysis, we identify seven fruitful future research directions on strategic, organizational and operational resilience.

315 citations


Journal ArticleDOI
TL;DR: This paper develops separate consistency and consensus processes to deal with HFLPR individual rationality and group rationality, and introduces a possibility distribution approach and a 2-tuple linguistic model to aid the consistency improvement process in a given HFLPR.
Abstract: The use of hesitant information in pairwise comparisons enriches the flexibility of qualitative decision making and allows for hesitant fuzzy linguistic preference relation (HFLPR). This paper develops separate consistency and consensus processes to deal with HFLPR individual rationality and group rationality. First, a possibility distribution approach and a 2-tuple linguistic model are introduced as support tools. Then, a new consistency measure is defined and a convergent algorithm described to aid the consistency improvement process in a given HFLPR. The algorithm adopts a local revision strategy and can be easily interpreted. Further, a direct consensus reaching process is presented to solve the HFLPR consensus problems. A prominent characteristic of this consensus reaching process is that the feedback system is based directly on the consensus degrees, thereby reducing the proximity measure calculations. Finally, the proposed consistency and consensus processes are applied to an investment project selection problem. The results and an in-depth comparative analysis verify the potential use and effectiveness of the proposed methods.

290 citations


Journal ArticleDOI
TL;DR: This study applies a network clustering method to group the literature through a citation network established from the DEA literature over the period 2000 to 2014, and presents the research fronts, i.e. coherent topics or issues addressed by groups of research articles in recent years.
Abstract: Research activities relating to data envelopment analysis (DEA) have grown at a fast rate recently. Exactly what activities have been carrying the research momentum forward is a question of particular interest to the research community. The purpose of this study is to find these research activities, or research fronts, in DEA. A research front refers to a coherent topic or issue addressed by a group of research articles in recent years. The large amount of DEA literature makes it difficult to use any traditional qualitative methodology to sort out the matter. Thus, this study applies a network clustering method to group the literature through a citation network established from the DEA literature over the period 2000 to 2014. The keywords of the articles in each discovered group help pinpoint its research focus. The four research fronts identified are “bootstrapping and two-stage analysis”, “undesirable factors”, “cross-efficiency and ranking”, and “network DEA, dynamic DEA, and SBM”. Each research front is then examined with key-route main path analysis to uncover the elements in its core. In addition to presenting the research fronts, this study also updates the main paths and author statistics of DEA development since its inception and compares them with those reported in a previous study.
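
The paper's specific clustering and key-route main path algorithms are not reproduced here, but the basic step of grouping a citation network can be sketched with generic modularity-based community detection; the toy edge list is an assumption for illustration:

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # toy citation network: an edge (a, b) means paper a cites paper b
    G = nx.DiGraph([("p1", "p2"), ("p3", "p2"), ("p3", "p1"),
                    ("p4", "p5"), ("p6", "p5"), ("p6", "p4")])

    # cluster the undirected projection; each community plays the role of a research front
    fronts = greedy_modularity_communities(G.to_undirected())
    for i, front in enumerate(fronts, start=1):
        print(f"group {i}: {sorted(front)}")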

217 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a quality-based price competition model for the WEEE recycling market in a dual channel environment comprising both formal and informal sectors and examined the equilibrium acquisition prices and effects of government subsidy in the two channels under four competitive scenarios.
Abstract: It is quite common to find both formal and informal sectors for processing waste electrical and electronic equipment (WEEE) in many emerging countries. Typically, the formal channel consists of recyclers with official qualifications for disassembling WEEE, while the informal channel is dominated by unregulated recyclers. We develop a quality-based price competition model for the WEEE recycling market in a dual channel environment comprising both formal and informal sectors. The equilibrium acquisition prices and effects of government subsidy in the two channels are examined under four competitive scenarios. While government subsidy can support the formal sector, our analysis shows that at a higher quality level of waste, the marginal effect of subsidy is not as promising. When the quality of waste is high but the government subsidy is not substantial, the informal sector always has a competitive advantage. To promote the healthy development of the recycling industry, the government should adjust the subsidy appropriately to confine the informal sector to waste at a quality level high enough to be suitable only for refurbishing. Our study also shows that both the formal and informal channels prefer high quality products. However, at quality levels where refurbishing is viable for both recyclers, the informal recycler always offers a better acquisition price and captures a bigger market share of used products than the formal recycler. In a quality-pricing environment, the acquisition prices in the two channels may cross over as quality increases. This indicates that neither of the two channels always has a clear price advantage at all quality levels, a result that cannot be obtained in a uniform pricing model. As such, product quality is an important factor to consider in a competitive recycling market.

188 citations


Journal ArticleDOI
TL;DR: A common framework is proposed that helps compare the cross-docking literature with on-field observations and platform managers' interviews; analyzing the gaps between the state of the art and industry practice helps identify future research directions in relation to industrial needs.
Abstract: The technique of cross-docking, which consists in unloading trucks, sorting the items they contain and reloading them directly into outbound trucks in order to minimize temporary storage, has attracted researchers' attention in the past few years. The number of articles on the subject has been growing very fast, but largely detached from industry practice. In order to see whether the current state of the art matches industry practice, we propose a common framework for comparing the cross-docking literature with on-field observations and platform managers' interviews. Analyzing the gaps between the state of the art and industry practice helps identify future research directions in relation to industrial needs.

152 citations


Journal ArticleDOI
TL;DR: This work estimates the unsatisfied demand (lack of free lockers or lack of bicycles) at each station for a given time period in the future and for each possible number of bicycles at the beginning of the period.
Abstract: Public bike-sharing programs have been deployed in hundreds of cities worldwide, improving mobility in a socially equitable and environmentally sustainable way. However, the quality of the service is drastically affected by imbalances in the distribution of bicycles among stations. We address this problem in two stages. First, we estimate the unsatisfied demand (lack of free lockers or lack of bicycles) at each station for a given time period in the future and for each possible number of bicycles at the beginning of the period. In a second stage, we use these estimates to guide our redistribution algorithms. Computational results using real data from the bike-sharing system in Palma de Mallorca (Spain) are reported.
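
A minimal sketch of the first-stage idea: the expected unsatisfied demand at one station, for each possible initial number of bicycles, can be estimated by simulation. Poisson rentals and returns, the rates, and the 20-dock capacity are illustrative assumptions, not the paper's estimator:

    import random

    def expected_unsatisfied(b0, capacity, rent_rate, return_rate, horizon, reps=2000):
        """Mean count of lost rentals (no bicycle) and lost returns (no free locker)."""
        total_lost = 0
        for _ in range(reps):
            bikes, t = b0, 0.0
            while True:
                t += random.expovariate(rent_rate + return_rate)
                if t > horizon:
                    break
                if random.random() < rent_rate / (rent_rate + return_rate):
                    if bikes > 0:
                        bikes -= 1
                    else:
                        total_lost += 1    # rental request finds no bicycle
                else:
                    if bikes < capacity:
                        bikes += 1
                    else:
                        total_lost += 1    # return finds no free locker
        return total_lost / reps

    # expected unsatisfied demand for every initial fill level of a 20-dock station
    curve = [expected_unsatisfied(b, 20, rent_rate=6.0, return_rate=5.0, horizon=2.0)
             for b in range(21)]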

151 citations


Journal ArticleDOI
TL;DR: Experimental results show that, even for the large-scale Beijing–Shanghai high-speed railway, the CPLEX solver can efficiently produce approximately optimal collaborative operation strategies within the given gaps in acceptable computational times, demonstrating the effectiveness and efficiency of the proposed approaches.
Abstract: Focusing on providing a modelling framework for train operation problems, this paper proposes a new collaborative optimization method for both the train stop planning and train scheduling problems at the tactical level. Specifically, by embedding the train stop planning constraints into the train scheduling process, we consider the minimization of the total dwelling time and the total delay between the real and expected departure times from the origin station for all trains on a single-track high-speed railway corridor. Using the stop planning indicators as important decision variables, this problem is formally formulated as a multi-objective mixed integer linear programming model and handled through a linear weighted-sum method. The theoretical analyses indicate that the formulated model is in essence a large-scale optimization model for real-life applications. The optimization software GAMS with the CPLEX solver is used to code the proposed model and generate approximately optimal solutions. Two sets of numerical examples are implemented to show the performance of the proposed approaches. The experimental results show that, even for the large-scale Beijing–Shanghai high-speed railway, the CPLEX solver can efficiently produce approximately optimal collaborative operation strategies within the given gaps in acceptable computational times, demonstrating the effectiveness and efficiency of the proposed approaches.
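
Under the linear weighted-sum method the two objectives collapse into one. With assumed notation (d_t the total dwelling time of train t, \delta_t its departure delay at the origin station, and T the train set), a sketch of the aggregated objective is:

    \min \;\; \lambda_1 \sum_{t \in T} d_t \;+\; \lambda_2 \sum_{t \in T} \delta_t,
    \qquad \lambda_1 + \lambda_2 = 1, \quad \lambda_1, \lambda_2 \ge 0.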

143 citations


Journal ArticleDOI
TL;DR: This study uses density estimates of consumption to derive prediction intervals of electricity cost for different time-of-use tariffs, and shows that a simple strategy of switching between different tariffs, based on a comparison of cost densities, delivers significant cost savings for the great majority of consumers.
Abstract: The recent advent of smart meters has led to large micro-level datasets. For the first time, the electricity consumption at individual sites is available on a near real-time basis. Efficient management of energy resources, electric utilities, and transmission grids can be greatly facilitated by harnessing the potential of this data. The aim of this study is to generate probability density estimates for consumption recorded by individual smart meters. Such estimates can assist decision making by helping consumers identify and minimize their excess electricity usage, especially during peak times. For suppliers, these estimates can be used to devise innovative time-of-use pricing strategies aimed at their target consumers. We consider methods based on conditional kernel density (CKD) estimation with the incorporation of a decay parameter. The methods capture the seasonality in consumption, and enable a nonparametric estimation of its conditional density. Using 8 months of half-hourly data for 1000 meters, we evaluate point and density forecasts, for lead times ranging from one half-hour up to a week ahead. We find that the kernel-based methods outperform a simple benchmark method that does not account for seasonality, and compare well with an exponential smoothing method that we use as a sophisticated benchmark. To gauge the financial impact, we use density estimates of consumption to derive prediction intervals of electricity cost for different time-of-use tariffs. We show that a simple strategy of switching between different tariffs, based on a comparison of cost densities, delivers significant cost savings for the great majority of consumers.
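
A minimal sketch of the core idea: a kernel density estimate of consumption in which older observations are down-weighted by a decay parameter. The bandwidth, decay value and data below are illustrative assumptions:

    import numpy as np

    def decayed_kde(y_hist, ages, y_grid, bandwidth=0.2, decay=0.98):
        """Weighted Gaussian KDE: observation i receives weight decay**ages[i]."""
        w = decay ** np.asarray(ages, dtype=float)
        w /= w.sum()
        z = (y_grid[:, None] - y_hist[None, :]) / bandwidth
        K = np.exp(-0.5 * z**2) / (bandwidth * np.sqrt(2.0 * np.pi))
        return K @ w                          # density estimate at each grid point

    # e.g. condition on the same half-hour of the week across past weeks
    y_hist = np.array([0.41, 0.38, 0.55, 0.47])   # past consumption (kWh)
    ages = np.array([3, 2, 1, 0])                 # weeks ago
    density = decayed_kde(y_hist, ages, np.linspace(0.0, 1.0, 101))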

140 citations


Journal ArticleDOI
TL;DR: In this article, a DEA-based optimization model is employed to estimate the potential gains from implementing two carbon emissions trading schemes compared with a command-and-control scheme in China, providing one argument for a market-based policy instrument instead of a command-and-control policy instrument for carbon emissions control in China.
Abstract: China has recently launched its pilot carbon emissions trading markets. Theoretically, heterogeneity in abatement cost determines the efficiency advantage of market-based programs over command-and-control policies on carbon emissions. This study tries to answer the question of what the abatement cost savings, or GDP loss recoveries, from carbon emissions trading in China would be, from the perspective of estimating the potential gains from carbon emissions trading. A DEA-based optimization model is employed to estimate the potential gains from implementing two carbon emissions trading schemes compared with a command-and-control scheme in China: a spatial tradable carbon emissions permit scheme and a spatial–temporal tradable carbon emissions permit scheme. The associated three types of potential gains, defined as the potential increases in GDP output through eliminating technical inefficiency, eliminating suboptimal spatial allocation of carbon emissions permits, and eliminating both suboptimal spatial and temporal allocation of carbon emissions permits, are estimated by an ex post analysis for China and its 30 provinces over 2006–2010. Substantial abatement cost savings and considerable carbon emissions reduction potentials are identified in this study, which provides one argument for implementing a market-based policy instrument instead of a command-and-control policy instrument for carbon emissions control in China.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a dynamic two-stage slacks-based measure model to evaluate the efficiencies of Chinese banks, in which non-performing loans are treated as a carry-over variable: an undesirable output of the profitability stage in the previous period but an input to the profitability stage in the current period.
Abstract: Operational processes of banks in China can be divided into productivity and profitability stages. Within this framework, non-performing loans can be treated as a carry-over variable: an undesirable output of the profitability stage in the previous period but an input to the profitability stage in the current period. Using this framework, this paper proposes a dynamic two-stage slacks-based measure model to evaluate the efficiencies of Chinese banks. Based on the proposed model, measures of stage, period and period-stage efficiencies are defined. The proposed approach is applied to evaluate the operational efficiency of banks in China during 2008–2012. Key findings are that banks in China show both technical and scale inefficiency during 2008–2012, resulting from inefficiencies in both the productivity and profitability stages, and that city-owned commercial banks are more overall technically efficient than state-owned and joint-stock commercial banks, although state-owned commercial banks show best practice for pure technical efficiency, with city-owned commercial banks performing better than joint-stock commercial banks on this measure.

Journal ArticleDOI
TL;DR: This paper provides a state-of-the-art literature review on staffing and scheduling approaches that account for nonstationary demand and develops recommendations for further research.
Abstract: Many service systems display nonstationary demand: the number of customers fluctuates over time according to a stochastic—though to some extent predictable—pattern. To safeguard the performance of such systems, adequate personnel capacity planning (i.e., determining appropriate staffing levels and/or shift schedules) is often crucial. This paper provides a state-of-the-art literature review on staffing and scheduling approaches that account for nonstationary demand. Relevant contributions published during 1991–2013 are categorized according to system assumptions, performance evaluation characteristics, optimization approaches and real-life application contexts. Based on these findings, recommendations for further research are developed.

Journal ArticleDOI
TL;DR: In this article, the authors analyzed the production performance of hospital services in Ontario (Canada), by investigating its key determinants, such as occupancy rate, rate of unit-producing personnel, outpatient-inpatient ratio, case-mix index, geographic locations, size and teaching status.
Abstract: In this work, we analyze production performance of hospital services in Ontario (Canada), by investigating its key determinants. Using data for the years 2003 and 2006, we follow the two-stage approach of Simar and Wilson (2007) [76]. Specifically, we use Data Envelopment Analysis (DEA) at the first stage to estimate efficiency scores and then use truncated regression estimation with double-bootstrap to test the significance of explanatory variables. We also examine distributions of efficiency across geographic locations, size and teaching status. We find that several organizational factors such as occupancy rate, rate of unit-producing personnel, outpatient–inpatient ratio, case-mix index, geographic locations, size and teaching status are significant determinants of efficiency.

Journal ArticleDOI
TL;DR: In this article, a joint dynamic pricing and preservation technology investment model for a deteriorating inventory system with time- and price-sensitive demand and reference price effects is proposed to maximize the retailer's total profit over a finite planning horizon.
Abstract: Marketing and consumer behavior literature has empirically demonstrated that reference prices play a critical role in customer purchase decisions. In this paper, we propose a joint dynamic pricing and preservation technology investment model for a deteriorating inventory system with time- and price-sensitive demand and reference price effects. A generalized model is presented to jointly determine the optimal selling price, preservation technology investment and replenishment strategies that maximize the retailer's total profit over a finite planning horizon. Beginning with mild assumptions, we derive theoretical results to demonstrate the existence of an optimal solution for the deteriorating inventory problem, and reveal the sensitivities of the optimal pricing and preservation technology investment decisions to the initial reference price. A simple iterative algorithm is then used to solve the proposed model by employing the theoretical results. Numerical examples and sensitivity analyses are then provided to illustrate the features of the proposed model. Finally, concluding remarks are offered.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated interactions among the different parties in a three-echelon closed-loop supply chain consisting of a single manufacturer, a single retailer and two recyclers and found that cooperative strategies can lead to win-win outcomes and increase an alliance's profit.
Abstract: The importance of closed-loop supply chains has been widely recognized in literature and in practice. The paper investigates interactions among the different parties in a three-echelon closed-loop supply chain consisting of a single manufacturer, a single retailer and two recyclers and focuses on how cooperative strategies affect closed-loop supply chain decision-making. Various cooperative models are considered by observing recent research and current cases, and the optimal decisions and supply chain profits of these models are discussed. By comparing various coalition structures, we discover that cooperative strategies can lead to win–win outcomes and increase an alliance's profit and can be effective ways of achieving greater efficiency from the point of view of the overall supply chain. Finally, the paper presents a detailed comparative analysis of these models and provides insights into the management of closed-loop supply chains.

Journal ArticleDOI
TL;DR: In this article, the authors developed a procedure that captures both quality and quantity of peer-reviewed journals, and applied it using a network DEA model to evaluate the research efficiency of most Australian universities.
Abstract: The motivation for this analysis is the recently developed Excellence in Research for Australia (ERA) program developed to assess the quality of research in Australia. The objective is to develop an appropriate empirical model that better represents the underlying production of higher education research. In general, past studies on university research performance have used standard DEA models with some quantifiable research outputs. However, these suffer from the twin maladies of an inappropriate production specification and a lack of consideration of the quality of output. By including the qualitative attributes of peer-reviewed journals, we develop a procedure that captures both quality and quantity, and apply it using a network DEA model. Our main finding is that standard DEA models tend to overstate the research efficiency of most Australian universities.

Journal ArticleDOI
TL;DR: The results obtained show that an economic design of acceptance sampling in such an integrated context can lead to important cost savings of more than 20%, compared with the 100% inspection policy.
Abstract: This paper considers the problem of integrated production, preventive maintenance and quality control for a stochastic production system subject to both reliability and quality deteriorations. A make-to-stock production strategy is used to provide protection to the serviceable stock against uncertainties. The quality control is performed using a single acceptance sampling plan by attributes. The preventive maintenance strategy consists in carrying out an imperfect maintenance as a part of the setup activity at the beginning of each lot production, while a major maintenance (overhaul) is undertaken once the proportion of defectives in a rejected lot reaches or exceeds a given threshold. The main objective of this study is to jointly optimize the production lot size, the inventory threshold, the sampling plan parameters and the overhaul threshold by minimizing the total incurred cost. To meet customer requirements, the optimization problem is subject to a specified constraint on the average outgoing quality limit (AOQL). A stochastic mathematical model is developed and solved using a simulation-based optimization approach. Numerical examples and thorough sensitivity analyses are provided to illustrate the efficiency of the proposed integrated model. Compared with the 100% inspection policy which is widely used in the literature on integrated production, maintenance and quality control, the results obtained show that an economic design of acceptance sampling in such an integrated context can lead to important cost savings of more than 20%.
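
To make the quality-control ingredient concrete, the sketch below computes the average outgoing quality (AOQ) curve of a single attribute sampling plan under rectifying inspection and its limit, the AOQL that the paper constrains. The plan parameters are illustrative assumptions, not the paper's optimized values:

    from math import comb

    def accept_prob(p, n, c):
        """Probability that a lot with defective rate p passes the (n, c) plan."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

    def aoq(p, n, c, N):
        """Average outgoing quality when rejected lots are fully screened."""
        return p * accept_prob(p, n, c) * (N - n) / N

    # AOQL = worst-case outgoing quality over all incoming quality levels
    n, c, N = 80, 2, 1000
    aoql = max(aoq(k / 1000, n, c, N) for k in range(1, 300))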

Journal ArticleDOI
TL;DR: To handle interactions between criteria and the hierarchical structure of criteria, the Choquet integral is applied as a preference model, together with the recently proposed methodology called the Multiple Criteria Hierarchy Process.
Abstract: The paper deals with two important issues of Multiple Criteria Decision Aiding: interaction between criteria and hierarchical structure of criteria. To handle interactions, we apply the Choquet integral as a preference model, and to handle the hierarchy of criteria, we apply the recently proposed methodology called Multiple Criteria Hierarchy Process. In addition to dealing with the above issues, we suppose that the preference information provided by the Decision Maker is indirect and has the form of pairwise comparisons of criteria with respect to their importance and pairwise preference comparisons of some pairs of alternatives with respect to some criteria. In consequence, many instances of the Choquet integral are usually compatible with this preference information. These instances are identified and exploited by Robust Ordinal Regression and Stochastic Multiobjective Acceptability Analysis. To illustrate the whole approach, we show its application to a real world decision problem concerning the ranking of universities for a hypothetical Decision Maker.
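
For reference, the Choquet integral of a non-negative evaluation vector x with respect to a capacity \mu (the standard definition, not notation specific to this paper) is

    C_\mu(x) \;=\; \sum_{i=1}^{n} \bigl( x_{(i)} - x_{(i-1)} \bigr)\, \mu\bigl( A_{(i)} \bigr),

where x_{(1)} \le \dots \le x_{(n)} are the evaluations sorted in non-decreasing order, x_{(0)} = 0, and A_{(i)} = \{ j : x_j \ge x_{(i)} \} is the set of criteria on which the alternative scores at least x_{(i)}. The capacity value \mu(A) of each coalition of criteria is exactly what the compatible instances mentioned above must pin down.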

Journal ArticleDOI
TL;DR: In this paper, the efficiency assessment of general networks of processes that produce both desirable and undesirable outputs is addressed, and the slacks-based inefficiency (SBI) of each process is calculated.
Abstract: In this paper the efficiency assessment of general networks of processes that produce both desirable and undesirable outputs is addressed. This problem arises in many contexts (e.g. transportation, energy generation). A general networks slacks-based inefficiency (GNSBI) measure can be computed using a simple linear program that takes into account the weak disposability of the bad outputs. The slacks-based inefficiency (SBI) of each process is also calculated. Target values for all inputs, outputs (both desirable and undesirable) and even intermediate products are also provided. The proposed approach is rather general and can accommodate many different network topologies and returns-to-scale assumptions. Two applications to the banking sector are presented: one to assess bank efficiencies and another to assess bank branches.

Journal ArticleDOI
TL;DR: A direct comparison with the multiplicative decomposition approach on data drawn from the literature brings to light the advantages of the method and some critical points that one should be concerned about when using the multiplicative efficiency decomposition.
Abstract: We present in this paper a general network DEA approach to deal with efficiency assessments in multi-stage processes. Our approach complies with the composition paradigm, where the efficiencies of the stages are estimated first and the overall efficiency of the system is obtained ex post. We use multi-objective programming as the modeling framework. This provides us with the means to assess unique and unbiased efficiency scores and, if required, to drive the efficiency assessments effectively in line with specific priorities given to the stages. A direct comparison with the multiplicative decomposition approach on data drawn from the literature brings to light the advantages of our method and some critical points that one should be concerned about when using the multiplicative efficiency decomposition.

Journal ArticleDOI
TL;DR: In this article, the authors examine the limitations of the multi-stage DEA (data envelopment analysis) model in the literature and show that non-increasing weights can affect the evaluation of overall and stage efficiency scores.
Abstract: This paper examines limitations of the multi-stage DEA (data envelopment analysis) model in the literature. We focus on the DEA model with additive efficiency decomposition. We create a taxonomy for the multi-stage DEA models and show when the decomposition weights can be non-increasing. When the decomposition weight for a stage is deemed reflective of the stage's relative importance, this property implies that upstream stages (regardless of their stage efficiency scores) will obtain higher priority in the efficiency decomposition. We also find that the non-increasing weights can affect the evaluation of overall and stage efficiency scores. We illustrate our findings through an empirical data set.

Journal ArticleDOI
TL;DR: In this article, a common framework for benchmarking and ranking decision-making units with DEA is proposed, which identifies a common best practice frontier as the facet of the DEA efficient frontier spanned by the technically efficient DMUs.
Abstract: This paper develops a common framework for benchmarking and ranking units with DEA. In many DEA applications, decision making units (DMUs) experience similar circumstances, so benchmarking analyses in those situations should identify common best practices in their management plans. We propose a DEA-based approach for the benchmarking to be used when there is no need (nor wish) to allow for individual circumstances of the DMUs. This approach identifies a common best practice frontier as the facet of the DEA efficient frontier spanned by the technically efficient DMUs in a common reference group. The common reference group is selected as that which provides the closest targets. A model is developed which allows us to deal not only with the setting of targets but also with the measurement of efficiency, because we can define efficiency scores of the DMUs by using the common set of weights (CSW) it provides. Since these weights are common to all the DMUs, the resulting efficiency scores can be used to derive a ranking of units. We discuss the existence of alternative optimal solutions for the CSW and find the range of possible rankings for each DMU which would result from considering all these alternate optima. These ranking ranges allow us to gain insight into the robustness of the rankings.

Journal ArticleDOI
TL;DR: The results of this novel data analytic approach, i.e. DEANN, proved that the accuracy of the ANN can be maintained while the size of the training dataset is significantly reduced, which validates the proposed method.
Abstract: The problem of effectively preprocessing a dataset containing a large number of performance metrics and an even larger number of records is crucial when utilizing an ANN. As such, this study proposes deploying DEA to preprocess the data to remove outliers, and hence preserve monotonicity, as well as to reduce the size of the dataset used to train the ANN. The results of this novel data analytic approach, i.e. DEANN, show that the accuracy of the ANN can be maintained while the size of the training dataset is significantly reduced. The DEANN methodology is demonstrated on the problem of predicting the functional status of patients in organ transplant operations. The results are very promising, which validates the proposed method.

Journal ArticleDOI
TL;DR: This study proposes two matheuristics that can solve very hard and large NWFSPs to optimality, including the benchmark instances of Vallada et al. and a set of 2000-job and 20-machine problems.
Abstract: The no-wait flowshop scheduling problem (NWFSP) with makespan minimization is a well-known strongly NP-hard problem with applications in various industries. This study formulates this problem as an asymmetric traveling salesman problem, and proposes two matheuristics to solve it. The performance of each of the proposed matheuristics is compared with those of the best existing algorithms on 21 benchmark instances of Reeves and 120 benchmark instances of Taillard. Computational results show that the presented matheuristics outperform all existing algorithms. In particular, all tested instances of the problem, including a subset of 500-job and 20-machine test instances, are solved to optimality in an acceptable computational time. Moreover, the proposed matheuristics can solve very hard and large NWFSPs to optimality, including the benchmark instances of Vallada et al. and a set of 2000-job and 20-machine problems. Accordingly, this study provides a feasible means of solving the NP-hard NWFSP completely and effectively.
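
The asymmetric-TSP view rests on the fact that, in a no-wait flowshop, the minimal start-to-start offset between consecutive jobs depends only on that pair of jobs. A minimal sketch of this standard reduction (illustrative code, not the paper's matheuristics):

    def delta(p, i, j):
        """Minimal start-to-start offset when job j directly follows job i (no-wait)."""
        best, ci, cj = 0, 0, 0
        for k in range(len(p[i])):
            ci += p[i][k]                 # completion of job i through machine k
            best = max(best, ci - cj)     # job j cannot start earlier than this
            cj += p[j][k]                 # work of job j before machine k + 1
        return best

    def makespan(p, seq):
        """No-wait flowshop makespan of a sequence = length of the matching ATSP path."""
        return sum(delta(p, a, b) for a, b in zip(seq, seq[1:])) + sum(p[seq[-1]])

    p = [[2, 3], [1, 5], [4, 1]]          # processing times: p[job][machine]
    print(makespan(p, [0, 1, 2]))         # -> 11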

Journal ArticleDOI
TL;DR: In this paper, a bi-objective stochastic mixed integer programming approach for a joint selection of suppliers and scheduling of production and distribution in a multi-echelon supply chain subject to local and regional disruption risks is presented.
Abstract: This paper presents a bi-objective stochastic mixed integer programming approach for a joint selection of suppliers and scheduling of production and distribution in a multi-echelon supply chain subject to local and regional disruption risks. The two conflicting problem objectives are minimization of cost and maximization of service level. Three shipping methods are considered for the distribution of products: batch shipping with a single shipment of different customer orders, batch shipping with multiple shipments of different customer orders, and individual shipping of each customer order immediately after its completion. The stochastic combinatorial optimization problem is formulated as a time-indexed mixed integer program with the weighted-sum aggregation of the two objective functions. The supply portfolio is determined by binary selection and fractional allocation variables, while time-indexed assignment variables determine the production and distribution schedules. The problem formulation incorporates supply–production, production–distribution and supply–distribution coordinating constraints to efficiently coordinate supply, production and distribution schedules. Numerical examples modelled after an electronics supply chain and computational results are presented, and some managerial insights are reported. The findings indicate that, for all shipping methods, the service-oriented supply portfolio is more diversified than the cost-oriented portfolio, and that the more cost-oriented the decision-making, the more delayed the expected supply, production and distribution schedules.

Journal ArticleDOI
TL;DR: In this paper, the authors survey approaches for the performance analysis of queueing systems with deterministic parameter changes over time, focusing on time-dependent changes in system parameters such as the arrival rate or the number of servers.
Abstract: Many queueing systems are subject to time-dependent changes in system parameters, such as the arrival rate or number of servers. Examples include time-dependent call volumes and agents at inbound call centers, time-varying air traffic at airports, time-dependent truck arrival rates at seaports, and cyclic message volumes in computer systems. There are several approaches for the performance analysis of queueing systems with deterministic parameter changes over time. In this survey, we develop a classification scheme that groups these approaches according to their underlying key ideas into (i) numerical and analytical solutions, (ii) approaches based on models with piecewise constant parameters, and (iii) approaches based on modified system characteristics. Additionally, we identify links between the different approaches and provide a survey of applications that are categorized into service, road and air traffic, and IT systems.
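
As a minimal illustration of category (ii), the pointwise/piecewise-stationary idea evaluates a stationary formula with the parameters holding in each interval. The sketch below applies the Erlang-C delay probability to a piecewise constant arrival-rate profile; the profile, service rate and server count are illustrative assumptions:

    from math import factorial

    def erlang_c(lam, mu, s):
        """Stationary M/M/s probability that an arriving customer must wait."""
        a = lam / mu                              # offered load
        if a >= s:
            return 1.0                            # unstable interval: certain delay
        head = sum(a**k / factorial(k) for k in range(s))
        tail = a**s / factorial(s) * s / (s - a)
        return tail / (head + tail)

    mu, servers = 1.0, 10
    hourly_rates = [4.0, 7.0, 9.5, 8.0]           # piecewise constant arrival rates
    delay_prob = [erlang_c(lam, mu, servers) for lam in hourly_rates]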

Journal ArticleDOI
TL;DR: This work considers the Train Timetabling Problem in a highly congested railway node, in which different Train Operators wish to run trains according to timetables that they propose, called ideal timetables.
Abstract: We consider the Train Timetabling Problem (TTP) in a railway node (i.e. a set of stations in an urban area interconnected by tracks), which calls for determining the best schedule for a given set of trains during a given time horizon, while satisfying several track operational constraints. In particular, we consider the context of a highly congested railway node in which different Train Operators wish to run trains according to timetables that they propose, called ideal timetables. The ideal timetables altogether may be (and usually are) conflicting, i.e. they do not respect one or more of the track operational constraints. The goal is to determine conflict-free timetables that differ as little as possible from the ideal ones. The problem was studied for a research project funded by Rete Ferroviaria Italiana (RFI), the main Italian railway Infrastructure Manager, who also provided us with real-world instances. We present an Integer Linear Programming (ILP) model for the problem, which adapts previous ILP models from the literature to deal with the case of a railway node. The Linear Programming (LP) relaxation of the model is used to derive a dual bound. In addition, we propose an iterative heuristic algorithm that is able to obtain good solutions to real-world instances with up to 1500 trains in short computing times. The proposed algorithm is also used to evaluate the capacity saturation of the railway nodes.

Journal ArticleDOI
TL;DR: In this paper, the authors consider multiple criteria decision aiding in the case of interaction between criteria, and propose to use AHP on a set of reference points in the scale of each criterion and interpolation to obtain the other values.
Abstract: We consider multiple criteria decision aiding in the case of interaction between criteria. In this case the usual weighted sum cannot be used to aggregate evaluations on different criteria, and other value functions with a more complex formulation have to be considered. The Choquet integral is the technique most used and most widespread in the literature. However, the application of the Choquet integral presents two main problems: the necessity to determine the capacity, which is the function that assigns a weight not only to all single criteria but also to all subsets of criteria, and the necessity to express evaluations on different criteria on the same scale. With respect to the first problem, we adopt the recently introduced Non-Additive Robust Ordinal Regression (NAROR), taking into account all the capacities compatible with the preference information provided by the DM; with respect to the second, we build the common scale for the considered criteria using the Analytic Hierarchy Process (AHP). We propose to use AHP on a set of reference points in the scale of each criterion and to use interpolation to obtain the other values. This considerably reduces the number of pairwise comparisons usually required of the DM when applying AHP. An illustrative example details the application of the proposed methodology.
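
A minimal sketch of the interpolation step, assuming AHP has already produced common-scale values at a few reference points of one criterion (the reference points and values are illustrative assumptions):

    import numpy as np

    # hypothetical reference levels of a criterion and their AHP-derived values
    ref_levels = np.array([0.0, 50.0, 100.0])
    ref_values = np.array([0.0, 0.7, 1.0])

    # any other performance level is scored by linear interpolation
    score = np.interp(62.0, ref_levels, ref_values)   # -> 0.772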

Journal ArticleDOI
TL;DR: The r-TSALBP, a multiobjective model for assembly line balancing to search for the most robust line configurations when demand changes, is proposed and results show the improvements of using robustness information during the search and the outstanding behavior of the adaptive evolutionary algorithm for solving the problem.
Abstract: Changes in demand when manufacturing different products require an optimization model that includes robustness in its definition and methods to deal with it. In this work we propose the r-TSALBP, a multiobjective model for assembly line balancing to search for the most robust line configurations when demand changes. The robust model definition considers a set of demand scenarios and captures temporal and spatial overloads of the stations in the assembly line for the products to be assembled. We present two multiobjective evolutionary algorithms to deal with one of the r-TSALBP variants. The first algorithm uses an additional objective to evaluate the robustness of the solutions. The second algorithm employs a novel adaptive method to evolve separate populations of robust and non-robust solutions during the search. Results show the improvements gained from using robustness information during the search and the outstanding behavior of the adaptive evolutionary algorithm for solving the problem. Finally, we analyze the managerial impacts of considering the r-TSALBP model for the different organization departments by exploiting the values of the robustness metrics.