
Showing papers in "Annals of Operations Research in 2018"


Journal ArticleDOI
TL;DR: The paper extends the state-of-the-art literature by proposing a pioneering roadmap to enhance the application of CE principles in organisations by means of Industry 4.0 approaches, together with a research agenda based on the most relevant management theories.
Abstract: This work makes a case for the integration of the increasingly popular and largely separate topics of Industry 4.0 and the circular economy (CE). The paper extends the state-of-the-art literature by proposing a pioneering roadmap to enhance the application of CE principles in organisations by means of Industry 4.0 approaches. Advanced and digital manufacturing technologies are able to unlock the circularity of resources within supply chains; however, the connection between CE and Industry 4.0 has not so far been explored. This article therefore contributes to the literature by unveiling how different Industry 4.0 technologies could underpin CE strategies, and to organisations by addressing those technologies as a basis for sustainable operations management decision-making. The main results of this work are: (a) a discussion on the mutually beneficial relationship between Industry 4.0 and the CE; (b) an in-depth understanding of the potential contributions of smart production technologies to the ReSOLVE model of CE business models; (c) a research agenda for future studies on the integration between Industry 4.0 and CE principles based on the most relevant management theories.

612 citations


Journal ArticleDOI
TL;DR: The feasibility, reliability, and stability of existing theories and methodologies should be thoroughly validated before they can be successfully applied to evaluate environmental performance in practice and to provide a scientific basis and guidance for formulating environmental protection policies.
Abstract: Traditional theories and methods for comprehensive environmental performance evaluation are challenged by the appearance of big data because of its large quantity, high velocity, and high diversity, even though big data is defective in accuracy and stability. In this paper, we first review the literature on environmental performance evaluation, including evaluation theories, the methods of data envelopment analysis, and the technologies and applications of life cycle assessment and the ecological footprint. Then, we present the theories and technologies regarding big data and the opportunities and applications for these in related areas, followed by a discussion on problems and challenges. The latest advances in environmental management based on big data technologies are summarized. Finally, we conclude that the feasibility, reliability, and stability of existing theories and methodologies should be thoroughly validated before they can be successfully applied to evaluate environmental performance in practice and to provide a scientific basis and guidance for formulating environmental protection policies.

228 citations


Journal ArticleDOI
TL;DR: This paper reviews the literature on ‘Big Data and supply chain management (SCM)’, dating back to 2006 and provides a thorough insight into the field by using the techniques of bibliometric and network analyses.
Abstract: As Big Data has undergone a transition from being an emerging topic to a growing research area, it has become necessary to classify the different types of research and examine the general trends of this research area. This should allow potential research areas for future investigation to be identified. This paper reviews the literature on ‘Big Data and supply chain management (SCM)’, dating back to 2006, and provides a thorough insight into the field by using the techniques of bibliometric and network analyses. We evaluate 286 articles published in the past 10 years and identify the top contributing authors, countries and key research topics. Furthermore, we obtain and compare the most influential works based on citations and PageRank. Finally, we identify and propose six research clusters in which scholars could be encouraged to expand Big Data research in SCM. We contribute to the literature on Big Data by discussing the challenges of current research, but more importantly, by identifying and proposing these six research clusters and future research directions. Finally, we offer managers different schools of thought to enable them to harness the benefits of using Big Data and analytics for SCM in their everyday work.

219 citations
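
To make the citation-and-PageRank step concrete, below is a minimal sketch of ranking papers in a citation network with networkx; the tiny graph and paper names are invented for illustration, not taken from the study.

    # Toy citation network: edges point from the citing paper to the cited
    # paper, so heavily cited works accumulate high PageRank scores.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("paper_A", "paper_B"), ("paper_A", "paper_C"),
        ("paper_C", "paper_B"), ("paper_D", "paper_B"),
        ("paper_D", "paper_C"),
    ])

    scores = nx.pagerank(G, alpha=0.85)  # 0.85 is the conventional damping factor
    for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{paper}: {score:.3f}")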


Journal ArticleDOI
TL;DR: A hybrid model of Fuzzy AHP and Fuzzy TOPSIS is proposed in this paper for the selection of an appropriate 3PL in order to outsource logistics activities of perishable products.
Abstract: Managing the value chain of perishable food items or pharmaceutical drugs is known as cold chain management. In India, approximately 30% of fruits and vegetables are wasted due to the lack of effective cold chain management. Logistics providers play a crucial role in making cold chains more effective. Based on a literature review, ten criteria are selected for the third-party logistics (3PL) selection process. Some of these criteria are transportation and warehousing cost, logistics infrastructure and warehousing facilities, customer service and reliability, and network management. This study illustrates a hybrid approach for the selection of a 3PL for cold chain management under a fuzzy environment. A hybrid model of Fuzzy AHP and Fuzzy TOPSIS is proposed in this paper for the selection of an appropriate 3PL in order to outsource logistics activities of perishable products. Fuzzy AHP is used to rank the different criteria for 3PL selection; Fuzzy TOPSIS is then used to select the best 3PL based on performance. The results imply that logistics providers should focus on practices such as automation of processes and innovation in cold chain processes to become more competitive.

137 citations
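
As a rough sketch of the Fuzzy TOPSIS step, the snippet below ranks two hypothetical 3PL providers on two benefit criteria using triangular fuzzy numbers; the ratings, weights and two-criteria setup are illustrative assumptions, not the paper's data.

    # Fuzzy TOPSIS with triangular fuzzy numbers (l, m, u); all criteria are
    # treated as benefit criteria for brevity.
    import numpy as np

    # ratings[alternative][criterion] = (l, m, u)
    ratings = np.array([
        [[5, 7, 9], [3, 5, 7]],   # 3PL provider 1
        [[3, 5, 7], [7, 9, 9]],   # 3PL provider 2
    ], dtype=float)
    weights = np.array([[0.5, 0.6, 0.7],
                        [0.3, 0.4, 0.5]])          # fuzzy criterion weights

    norm = ratings / ratings[:, :, 2].max(axis=0)[None, :, None]  # scale by max upper bound
    weighted = norm * weights[None, :, :]          # component-wise fuzzy product

    fpis = weighted.max(axis=0)                    # fuzzy positive ideal solution
    fnis = weighted.min(axis=0)                    # fuzzy negative ideal solution

    def vertex_dist(a, b):
        # vertex-method distance between triangular fuzzy numbers
        return np.sqrt(((a - b) ** 2).mean(axis=-1))

    d_pos = vertex_dist(weighted, fpis[None]).sum(axis=1)
    d_neg = vertex_dist(weighted, fnis[None]).sum(axis=1)
    cc = d_neg / (d_pos + d_neg)                   # closeness coefficient: higher is better
    print("closeness coefficients:", cc.round(3))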


Journal ArticleDOI
TL;DR: This paper examines both applied and scholarly applications of OR-based big data analytical tools and techniques within an operations and supply chain management context to highlight their future potential in this domain.
Abstract: Few topics have generated more discourse in recent years than big data analytics. Given their knowledge of analytical and mathematical methods, operations research (OR) scholars would seem well poised to take a lead role in this discussion. Unfortunately, some have suggested there is a misalignment between the work of OR scholars and the needs of practicing managers, especially those in the field of operations and supply chain management where data-driven decision-making is a key component of most job descriptions. In this paper, we attempt to address this misalignment. We examine both applied and scholarly applications of OR-based big data analytical tools and techniques within an operations and supply chain management context to highlight their future potential in this domain. This paper contributes by providing suggestions for scholars, educators, and practitioners that help illustrate how OR can be instrumental in solving big data analytics problems in support of operations and supply chain management.

136 citations


Journal ArticleDOI
TL;DR: The process of TISM is outlined and guidelines and rules of thumb are provided to check the correctness of TISM at each step, to help future modellers translate their ill-structured mental models into sound theoretical models.
Abstract: Interpretive structural modelling (ISM) has been further interpreted in the form of total interpretive structural modelling (TISM). These are graphical models that represent hierarchical relationships and help in better and more precise conceptualization and theory building. ISM only interprets the nodes in a digraph, whereas TISM interprets both nodes and links. The errors observed in applications of ISM and TISM reported in the past motivated this paper to provide checks and guidelines for the correctness of total interpretive structural models. The paper first gives an overview of past applications of TISM. The process of TISM is then outlined, and guidelines and rules of thumb are provided to check the correctness of TISM at each step. Some typical errors in TISM models and their modifications are discussed to help future modellers translate their ill-structured mental models into sound theoretical models. A discussion on the usefulness of TISM for big data analytics for theory building is provided, and future directions of research are outlined.

129 citations
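
One of the correctness checks in ISM/TISM is that the reachability matrix is transitive. A minimal sketch, assuming a made-up four-element binary relation, computes the transitive closure with Warshall's algorithm:

    # Transitive closure of an initial reachability matrix (1 = "influences").
    import numpy as np

    A = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1]], dtype=bool)

    R = A.copy()
    for k in range(len(R)):                 # Warshall's algorithm
        R |= np.outer(R[:, k], R[k, :])     # i->k and k->j implies i->j
    print(R.astype(int))                    # final reachability matrix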


Journal ArticleDOI
TL;DR: This is an updated version of the paper “Large-scale Unit Commitment under uncertainty: a literature survey” that appeared in 4OR 13(2):115–171 (2015); this version has over 170 more citations, proving how fast the literature on uncertain Unit Commitment evolves, and how strong the interest in this subject is.
Abstract: The Unit Commitment problem in energy management aims at finding the optimal production schedule of a set of generation units, while meeting various system-wide constraints. It has always been a large-scale, non-convex, difficult problem, especially in view of the fact that, due to operational requirements, it has to be solved in an unreasonably small time for its size. Recently, growing renewable energy shares have strongly increased the level of uncertainty in the system, making the (ideal) Unit Commitment model a large-scale, non-convex and uncertain (stochastic, robust, chance-constrained) program. We provide a survey of the literature on methods for the Uncertain Unit Commitment problem, in all its variants. We start with a review of the main contributions on solution methods for the deterministic versions of the problem, focussing on those based on mathematical programming techniques that are more relevant for the uncertain versions of the problem. We then present and categorize the approaches to the latter, while providing entry points to the relevant literature on optimization under uncertainty. This is an updated version of the paper “Large-scale Unit Commitment under uncertainty: a literature survey” that appeared in 4OR 13(2):115–171 (2015); this version has over 170 more citations, most of which appeared in the last 3 years, proving how fast the literature on uncertain Unit Commitment evolves, and therefore the interest in this subject.

111 citations
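
For readers new to the problem, here is a toy deterministic Unit Commitment MILP (two units, three hours) using PuLP; the costs, limits and demand profile are invented, and the real models surveyed add many more constraints (ramping, minimum up/down times, reserves, uncertainty):

    # Toy unit commitment: choose on/off status and output per unit and hour.
    import pulp

    units, hours = [0, 1], [0, 1, 2]
    pmin, pmax = {0: 10, 1: 20}, {0: 60, 1: 100}
    cvar, cfix = {0: 5.0, 1: 3.0}, {0: 40.0, 1: 90.0}   # $/MWh, $/h while on
    demand = {0: 50, 1: 120, 2: 80}

    m = pulp.LpProblem("toy_uc", pulp.LpMinimize)
    u = pulp.LpVariable.dicts("on", (units, hours), cat="Binary")
    p = pulp.LpVariable.dicts("p", (units, hours), lowBound=0)

    m += pulp.lpSum(cvar[i] * p[i][t] + cfix[i] * u[i][t]
                    for i in units for t in hours)
    for t in hours:
        m += pulp.lpSum(p[i][t] for i in units) == demand[t]   # load balance
        for i in units:
            m += p[i][t] <= pmax[i] * u[i][t]                  # capacity if on
            m += p[i][t] >= pmin[i] * u[i][t]                  # min output if on

    m.solve(pulp.PULP_CBC_CMD(msg=False))
    for t in hours:
        print(t, [(int(u[i][t].value()), p[i][t].value()) for i in units])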


Journal ArticleDOI
TL;DR: An empirical analysis using real-world data from a major P2P lending platform in China shows that the proposed default prediction method can improve loan default prediction performance compared with existing methods based only on hard information.
Abstract: Predicting whether a borrower will default on a loan is of significant concern to platforms and investors in online peer-to-peer (P2P) lending. Because the data types online platforms use are complex and involve unstructured information such as text, which is difficult to quantify and analyze, loan default prediction faces new challenges in P2P lending. To this end, we propose a default prediction method for P2P lending combined with soft information related to textual description. We introduce a topic model to extract valuable features from the descriptive text concerning loans and construct four default prediction models to demonstrate the performance of these features for default prediction. Moreover, a two-stage method is designed to select an effective feature set containing both soft and hard information. An empirical analysis using real-world data from a major P2P lending platform in China shows that the proposed method can improve loan default prediction performance compared with existing methods based only on hard information.

110 citations
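
A loose analogue of the proposed pipeline, with invented loan descriptions and labels: topic proportions extracted from the text ("soft" information) are concatenated with a numeric "hard" feature before fitting a classifier. This is a sketch of the idea, not the paper's exact models:

    # Soft (topic) + hard (numeric) features for default prediction.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import LogisticRegression

    texts = ["need money for small shop stock", "urgent loan medical bills",
             "expand family farm equipment", "pay off old debt quickly"]
    y = np.array([0, 1, 0, 1])                      # 1 = default (made up)
    hard = np.array([[0.3], [0.8], [0.2], [0.9]])   # e.g. debt-to-income ratio

    tf = CountVectorizer().fit_transform(texts)
    topics = LatentDirichletAllocation(n_components=2,
                                       random_state=0).fit_transform(tf)
    X = np.hstack([hard, topics])                   # combined feature set
    clf = LogisticRegression().fit(X, y)
    print(clf.predict_proba(X)[:, 1].round(2))      # in-sample default probabilities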


Journal ArticleDOI
TL;DR: In this paper, a new completion method for the incomplete pairwise comparison matrix (iPCM) is proposed; it provides a new perspective for estimating the missing values in iPCMs with explicit physical meaning, and is straightforward and flexible.
Abstract: The pairwise comparison matrix (PCM), as a crucial component of the Analytic Hierarchy Process (AHP), presents the preference relations among alternatives. However, in many cases, the PCM is difficult to complete, which obstructs the subsequent operations of the classical AHP. In this paper, based on the decision-making trial and evaluation laboratory (DEMATEL) method, which has the ability to derive the total relation matrix from the direct relation matrix, a new completion method for the incomplete pairwise comparison matrix (iPCM) is proposed. The proposed method provides a new perspective for estimating the missing values in iPCMs with explicit physical meaning, and is straightforward and flexible. Several experiments are implemented to demonstrate the completion ability of the proposed method and to offer some insights into the method and matrix consistency.

110 citations
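
The DEMATEL step the method builds on can be stated in a few lines: normalise the direct relation matrix D and derive the total relation matrix T = N(I − N)⁻¹, which accumulates all direct and indirect influence paths. The 3 × 3 matrix below is illustrative:

    # DEMATEL total-relation matrix from a made-up direct-relation matrix.
    import numpy as np

    D = np.array([[0, 3, 2],
                  [1, 0, 3],
                  [2, 1, 0]], dtype=float)

    N = D / D.sum(axis=1).max()            # normalise by the largest row sum
    T = N @ np.linalg.inv(np.eye(3) - N)   # direct + all indirect influences
    print(T.round(3))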


Journal ArticleDOI
TL;DR: A customer involvement approach is introduced as a new means of achieving customer-centred new product development at an electronics company; the findings reveal that big data can enable customer involvement that provides valuable input for developing new products.
Abstract: This study explores how big data can be used to enable customers to express unrecognised needs. By acquiring this information, managers can gain opportunities to develop customer-centred products. Big data can be defined as multimedia-rich and interactive low-cost information resulting from mass communication. It offers customers a better understanding of new products and provides new, simplified modes of large-scale interaction between customers and firms. Although previous studies have pointed out that firms can better understand customers’ preferences and needs by leveraging different types of available data, the situation is evolving, with increasing application of big data analytics for product development, operations and supply chain management. In order to utilise the customer information available from big data to a larger extent, managers need to identify how to establish a customer-involving environment that encourages customers to share their ideas with managers, contribute their know-how, fiddle around with new products, and express their actual preferences. We investigate a new product development project at an electronics company, STE, and describe how big data is used to connect to, interact with and involve customers in new product development in practice. Our findings reveal that big data can offer customer involvement so as to provide valuable input for developing new products. In this paper, we introduce a customer involvement approach as a new means of coming up with customer-centred new product development.

98 citations


Journal ArticleDOI
TL;DR: The main purpose of the paper is to investigate the optimal retailer’s replenishment decisions for deteriorating items including time-dependent demand for demonstrating more practical circumstances within economic-order quantity frameworks.
Abstract: In this paper, a deterministic inventory control model with deterioration is developed. Here, the deterioration rate is stochastic and follows a Weibull distribution. A time-dependent demand approach is introduced to show the applicability of our proposed model and to keep it up-to-date with respect to time. The main purpose of the paper is to investigate the retailer's optimal replenishment decisions for deteriorating items with time-dependent demand, thereby capturing more practical circumstances within economic-order-quantity frameworks. In keeping with modern practice, we consider that the holding cost is fully time-dependent, and shortages are allowed in this model. Subject to the formulated model, we minimize the total inventory cost. The mathematical model is explored through numerical examples to validate the proposed model. A sensitivity analysis of the optimal solution with regard to important parameters is also carried out to elaborate on the quality, e.g., stability, of our result and to possibly modify our model. The paper ends with a conclusion and an outlook to future studies.
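
A numeric sketch of the model's ingredients may help: a Weibull deterioration rate θ(t) = αβt^(β−1), a linear time-dependent demand rate, and a time-proportional holding cost, with the best order quantity found by brute-force search. All parameter values are invented, and shortages are omitted for brevity:

    # Simulate one replenishment cycle and scan order quantities Q.
    import numpy as np

    a_w, b_w = 0.02, 1.5          # Weibull scale/shape for deterioration
    dem = lambda t: 20 + 4 * t    # time-dependent demand rate
    h = lambda t: 0.5 * t         # time-proportional holding cost rate
    A, c = 100.0, 2.0             # ordering cost per cycle, unit purchase cost

    def cost_per_unit_time(Q, dt=1e-3):
        I, t, hold = Q, 0.0, 0.0
        while I > 0:
            theta = a_w * b_w * t ** (b_w - 1) if t > 0 else 0.0
            hold += h(t) * I * dt
            I -= (dem(t) + theta * I) * dt   # depletion: demand + deterioration
            t += dt
        return (A + c * Q + hold) / t        # average cost over cycle length t

    Qs = np.arange(10, 200, 5)
    costs = [cost_per_unit_time(Q) for Q in Qs]
    print("best Q ~", Qs[int(np.argmin(costs))])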

Journal ArticleDOI
TL;DR: This research develops a resource dependence model connecting big data analytics to superior humanitarian outcomes by means of a case study (qualitative) of twelve humanitarian value streams and generalizes RDT assumptions from the multi-tiered supply chains to distributed networks.
Abstract: Humanitarian operations in developing world settings present a particularly rich opportunity for examining the use of big data analytics. Focal non-governmental organizations (NGOs) often synchronize the delivery of services in a supply chain fashion by aligning recipient community needs with resources from various stakeholders (nodes). In this research, we develop a resource dependence model connecting big data analytics to superior humanitarian outcomes by means of a case study (qualitative) of twelve humanitarian value streams. Specifically, we identify the nodes in the network that can exert power on the focal NGOs based upon the respective resources being provided to ensure that sufficient big data is being created. In addition, we are able to identify how the type of data attribute, i.e., volume, velocity, veracity, value, and variety, relates to different forms of humanitarian interventions (e.g., education, healthcare, land reform, disaster relief, etc.). Finally, we identify how the various types of data attributes affect humanitarian outcomes in terms of deliverables, lead-times, cost, and propagation. This research presents evidence of important linkages between the developmental body of knowledge and that of resource dependence theory (RDT) and big data analytics. In addition, we are able to generalize RDT assumptions from the multi-tiered supply chains to distributed networks. The prescriptive nature of the findings can be used by donor agencies and focal NGOs to design interventions and collect the necessary data to facilitate superior humanitarian outcomes.

Journal ArticleDOI
TL;DR: Results of this review indicate that data envelopment analysis shows great promise as an evaluative tool for supply chain management, where the production function between inputs and outputs is virtually absent or extremely difficult to acquire.
Abstract: Supply chain management aims to design, manage and coordinate material/product, information and financial flows to fulfill customer requirements at low cost, thereby increasing supply chain profitability. In recent decades, data envelopment analysis has become a major topic of interest as a mathematical tool to evaluate supply chain management. While various data envelopment analysis models have been suggested to measure and evaluate supply chain management, there is a lack of research regarding systematic literature reviews and classification of studies in this field. To address this, major databases including Web of Science and Scopus were selected, and the systematic review and meta-analysis method called “PRISMA” was applied. Accordingly, 75 published articles appearing in 35 scholarly international journals and conferences between 1996 and 2016 were reviewed to provide a comprehensive overview of data envelopment analysis models for evaluating supply chain management. The selected articles were categorized by author name, year of publication, technique, application area, country, scope, data envelopment analysis purpose, study purpose, research gap and contribution, results and outcome, and the journals and conferences in which they appeared. The results of this study indicate that the areas of supplier selection, supply chain efficiency and sustainable supply chains appear most frequently. In addition, the results indicate that data envelopment analysis shows great promise as an evaluative tool for supply chain management, where the production function between inputs and outputs is virtually absent or extremely difficult to acquire. The facility of multiple inputs and multiple outputs of the data envelopment analysis model is definitely attractive to most researchers, and the data envelopment analysis procedure has therefore found many applications beyond supply chain management in organizations and industry.

Journal ArticleDOI
TL;DR: Twitter data are utilised to develop waste minimization strategies by backtracking the supply chain, demonstrated for the beef supply chain; the proposed model is generic enough to be applied to other domains as well.
Abstract: Approximately one third of the food produced is discarded or lost, which accounts for 1.3 billion tons per annum. The waste is generated throughout the supply chain: farmers, wholesalers/processors, logistics, retailers and consumers. The majority of waste occurs at the interface of retailers and consumers. Many global retailers are making efforts to extract intelligence from customers' complaints left at retail stores to backtrack their supply chains and mitigate the waste. However, the majority of customers do not leave complaints in the store for various reasons, such as inconvenience, lack of time, distance, or ignorance. In the current digital world, consumers are active on social media and freely express their sentiments, thoughts, and opinions about a particular product. For example, on average, 45,000 tweets related to beef products are posted daily expressing likes and dislikes. These tweets are large in volume, scattered and unstructured in nature. In this study, Twitter data are utilised to develop waste minimization strategies by backtracking the supply chain. The execution process of the proposed framework is demonstrated for the beef supply chain. The proposed model is generic enough to be applied to other domains as well.

Journal ArticleDOI
TL;DR: It is demonstrated that reciprocal preferences and CLA significantly affect the equilibrium and firms' profits and utilities, and that the supply chain efficiency increases with the participants' reciprocity while it decreases with CLA.
Abstract: The traditional self-interest hypothesis is far from perfect. Social preference has a significant impact on every firm’s decision making. This paper incorporates reciprocal preferences and consumers’ low-carbon awareness (CLA) into the dyadic supply chain in which a single manufacturer plays a Stackelberg-like game with a single retailer. This research intends to investigate how reciprocity and CLA may affect the decisions and performances of the supply chain members and the system’s efficiency. In this study, the following two scenarios are discussed: (1) both the manufacturer and the retailer have no reciprocal preferences and (2) both of them have reciprocal preferences. We derive equilibriums under both scenarios and present a numerical analysis. We demonstrate that reciprocal preferences and CLA significantly affect the equilibrium and firms’ profits and utilities. First, the optimal retail price increases with CLA, while it decreases with the reciprocity of the retailer and the manufacturer; the optimal wholesale price increases with CLA and the retailer’s reciprocity, while it decreases with the manufacturer’s reciprocity. The optimal emission reduction level increases with CLA and the reciprocity of both the manufacturer and the retailer. Second, the optimal profits of the participants and the supply chain increase with CLA, the participants’ optimal profits are concave in their own reciprocity and increase with their co-operators’ reciprocity. Third, the participants’ optimal utilities increase with CLA and their reciprocity. Finally, the supply chain efficiency increases with the participants’ reciprocity, while the efficiency decreases with CLA.

Journal ArticleDOI
TL;DR: The purpose is to survey the main problems and methods arising in the field of shared mobility systems, covering several planning levels, from strategic to operational ones, such as station location, station sizing, and rebalancing routes.
Abstract: Transportation habits have been significantly modified in the past decade by the introduction of shared mobility systems. These have emerged as a partial response to the need of resorting to green means of transportation and to the desire of being more flexible in the choice of trips, both from a spatial and a temporal point of view. On the one hand, shared mobility systems have taken advantage of the interest of riders for shared experiences. On the other hand, their success has been possible as a result of the recent advances in information and communications technology. The operational research community is already very active in this emerging field, which provides a very rich source of new and interesting challenges, covering several planning levels, from strategic to operational ones, such as station location, station sizing, and rebalancing routes. A fascinating feature of this field is the variety of the methods used to deal with these questions. Our purpose is to survey the main problems and methods arising in this field.


Journal ArticleDOI
TL;DR: It is argued that the integrated GRA–MOGLP approach provides an effective tool for the evaluation and optimisation of complex sustainable electricity generation planning, particularly promising in dealing with uncertainty and imprecisions, which reflect real-life scenarios in planning processes.
Abstract: Sustainable energy generation is a key feature of sustainable development, and among the various sources of energy, electricity seems particularly important due to some unique characteristics. Optimising the electricity generation mix is a highly complex task and requires consideration of numerous conflicting criteria. To deal with the uncertainty of experts' opinions and the inaccuracy of the available data, and to include more factors, some of which are difficult to quantify, in particular environmental and social criteria, we applied grey relational analysis (GRA) with grey linguistic and grey interval values to obtain the rank of each system. The obtained rankings were then used as coefficients for a multi-objective decision-making problem, aimed at minimizing cost, import dependency and emissions, as well as maximizing the share of generation sources with better rankings. Due to the existence of interval variables, the multi-objective grey linear programming (MOGLP) method was used to solve the problem. Our results for the UK as a case study suggest an increased role for all low-carbon energy technologies and a sharp reduction in the use of coal and oil. We argue that the integrated GRA–MOGLP approach provides an effective tool for the evaluation and optimisation of complex sustainable electricity generation planning. It is particularly promising in dealing with uncertainty and imprecision, which reflect real-life scenarios in planning processes.
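
The grey relational analysis at the core of the approach reduces to a short computation: compare each option's criteria values to an ideal reference series and average the resulting grey relational coefficients. The options and values below are invented:

    # Grey relational grades for three hypothetical generation options.
    import numpy as np

    X = np.array([[0.8, 0.6, 0.9],     # rows: options, cols: benefit criteria
                  [0.5, 0.9, 0.7],
                  [0.9, 0.4, 0.6]])
    ref = X.max(axis=0)                 # ideal reference series
    delta = np.abs(X - ref)
    zeta = 0.5                          # distinguishing coefficient, usual choice
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    grade = coef.mean(axis=1)           # grey relational grade per option
    print(grade.round(3))               # higher grade = closer to the ideal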

Journal ArticleDOI
TL;DR: A detailed literature review of over 180 papers about different threats, their consequences pertinent to the maritime industry, and a discussion on various risk assessment models and computational algorithms are provided.
Abstract: Due to the undesirable implications of maritime mishaps such as ship collisions and the consequent damage to maritime property, the safety and security of waterways, ports and other maritime assets are of the utmost importance to authorities and researchers. Terrorist attacks, piracy, accidents and environmental damage are some of the concerns. This paper provides a detailed literature review of over 180 papers about different threats and their consequences pertinent to the maritime industry, and a discussion on various risk assessment models and computational algorithms. The methods are then categorized into three main groups: statistical, simulation and optimization models. Corresponding statistics of the papers based on year of publication, region of case studies and methodology are also presented.

Journal ArticleDOI
TL;DR: The authors develop an efficient method to visualize the demand distributional characteristics of the fast fashion market, find that big data streams of customer reviews contain useful information for better sales nowcasting, and discuss the current influence pattern of sentiment on sales.
Abstract: The proliferation of online social media and the phenomenal growth of online commerce have brought us to the era of big data. Before this availability of data, models of demand distribution at the product level proved elusive due to the ever shorter product life cycle. Methods of sales forecasting are often conceived in terms of longer-run trends based on weekly, monthly or even quarterly data, even in markets with rapidly changing customer demand such as the fast fashion industry. Yet short-run models of demand distribution and sales forecasting (a.k.a. nowcasting) are arguably more useful for managers, as the majority of their decisions are concerned with day-to-day discretionary spending and operations. Observations in the fast fashion market were acquired, over a collection time frame of about one month, from a major Chinese e-commerce platform at granular, half-daily intervals. We developed an efficient method to visualize the demand distributional characteristics; found that big data streams of customer reviews contain useful information for better sales nowcasting; and discussed the current influence pattern of sentiment on sales. We expect our results to contribute to practical visualization of the demand structure at the product level, where the number of products is high and the product life cycle is short; to revealing big data streams as a source for better sales nowcasting at the corporate and product level; and to a better understanding of the influence of online sentiment on sales. Managers may thus make better decisions concerning inventory management, capacity utilization, and lead and lag times in supply-chain operations.

Journal ArticleDOI
TL;DR: The results of the Granger causality tests prove that a systemic risk measure is a great alternative tool for monitoring early warning signals of distress in the real economy.
Abstract: This paper studies the exposure and contribution of financial institutions to systemic risks in financial markets. We employ three popular indicators of a financial institution’s exposure to systemic risks: the systemic risk index (SRISK) and marginal expected shortfall (MES) of Brownlees and Engle (Volatility, correlation and tails for systemic risk measurement, Social Science Research Network, Rochester, NY, 2012) and the conditional Value-at-Risk (CoVaR) of Adrian and Brunnermeier (2011). We use a primary database of Taiwan financial institutions for our empirical study. A panel contains data of stock market returns and balance sheets of 31 Taiwan financial institutions for 2005–2014. We focus on systemic risk analysis so as to understand the dynamics of volatility, interdependency, and risk during the recent financial crisis. We then report the time series dynamics and cross sectional rankings of these systemic risk measures. The main results indicate that although these three measures differ in their definition of the contributions to systemic risk, all are quite similar in identifying systemically important financial institutions (SIFIs). Moreover, we find empirical evidence that systemic risk contributions are closely related to certain institution characteristic factors. The results of the Granger causality tests prove that a systemic risk measure is a great alternative tool for monitoring early warning signals of distress in the real economy.
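
Of the three measures, MES is the simplest to sketch: an institution's average return on the days when the market return falls in its worst 5% tail. The synthetic returns below are for illustration only:

    # Marginal expected shortfall (MES) at the 5% level on synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    market = rng.normal(0, 0.01, 2500)                 # ~10 years of daily returns
    firm = 0.8 * market + rng.normal(0, 0.008, 2500)   # correlated firm returns

    var5 = np.quantile(market, 0.05)                   # market 5% tail threshold
    mes = firm[market <= var5].mean()                  # firm's mean return in that tail
    print(f"MES(5%) = {mes:.4f}")                      # more negative = more exposed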

Journal ArticleDOI
TL;DR: Computational results show that the integrated system is always beneficial for the members of the chain, given that demand is uncertain in nature and the retailers face shortages.
Abstract: The paper studies a two-echelon supply chain comprising one manufacturer and two competing retailers, with sales-price-dependent demand and random arrival of customers. The manufacturer acts as the supplier, specifying the wholesale price for the retailers, and the retailers compete with each other by announcing different sales prices. We analyse a single-period newsvendor-type model to determine the optimal order quantity, considering the competing retailers' strategies. The unsold items at the retailers are bought back by the manufacturer at a price lower than the sales price. On the other hand, the retailers face shortages as demand is uncertain in nature. The profit functions of the manufacturer and the two retailers are analyzed and compared following the Stackelberg, Bertrand, Cournot–Bertrand and integrated approaches. Moreover, a distribution-free model is analyzed for the integrated profit of the chain. A numerical example is given to illustrate the theoretical results developed in each case. Computational results show that the integrated system is always beneficial for the members of the chain.
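
The critical-fractile logic underlying such single-period models (here without the competing-retailer game, and assuming normal demand with invented prices) can be sketched as follows:

    # Classical newsvendor with a buyback: balance underage vs. overage cost.
    from scipy.stats import norm

    price, wholesale, buyback = 12.0, 7.0, 3.0
    cu = price - wholesale            # underage cost: margin lost per unit short
    co = wholesale - buyback          # overage cost: loss per unsold, bought-back unit
    fractile = cu / (cu + co)         # optimal service level

    mu, sigma = 100, 20               # demand ~ Normal(100, 20)
    q_star = norm.ppf(fractile, loc=mu, scale=sigma)
    print(f"critical fractile = {fractile:.2f}, optimal order = {q_star:.1f}")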

Journal ArticleDOI
TL;DR: This paper systematically reviews the existing body of knowledge to categorize and evaluate the reported studies on healthcare operations and data mining frameworks; the outcome is useful as a reference for practitioners and as a research platform for academia.
Abstract: With the widespread use of healthcare information systems, commonly known as electronic health records, there is significant scope for improving the way healthcare is delivered by resorting to the power of big data. This has made data mining and predictive analytics important tools for healthcare decision making. The literature has reported attempts at knowledge discovery from big data to improve the delivery of healthcare services; however, there appears to be no attempt to assess and synthesize the available information on how the big data phenomenon has contributed to better outcomes in the delivery of healthcare services. This paper aims to achieve this by systematically reviewing the existing body of knowledge to categorize and evaluate the reported studies on healthcare operations and data mining frameworks. The outcome of this study is useful as a reference for practitioners and as a research platform for academia.

Journal ArticleDOI
TL;DR: The slacks-based measure (SBM) model is extended to consider undesirable outputs and the variable returns to scale (VRS) assumption for environmental efficiency evaluation of DMUs, and the proposed approach is applied to an environmental efficiency analysis of transportation systems.
Abstract: In the big data context, decision makers usually face the problem of evaluating the environmental efficiencies of a massive number of decision making units (DMUs) using the data envelopment analysis (DEA) method. However, standard implementations of the traditional DEA calculation process consume much time when the data set is very large. To eliminate this limitation of DEA applied to big data, firstly, the slacks-based measure (SBM) model is extended to consider undesirable outputs and the variable returns to scale (VRS) assumption for environmental efficiency evaluation of the DMUs. Then, an approach comprising two algorithms is proposed for environmental efficiency evaluation when the number of DMUs is massive. The set of DMUs is partitioned into subsets, a technique which facilitates the application of a parallel computing mechanism. Algorithm 1 can be used to identify the environmentally efficient DMUs in any DMU set. Further, Algorithm 2 (a parallel computing algorithm) shows how to use the proposed model and Algorithm 1 in parallel to find the environmental efficiencies of all DMUs. A simulation shows that the parallel computing design helps to significantly reduce calculation time when completing environmental efficiency evaluation tasks with large data sets, compared with the traditional calculation process. Finally, the proposed approach is applied to an environmental efficiency analysis of transportation systems.
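
To illustrate the parallel design, the sketch below scores DMUs in independent worker processes; for brevity each worker solves a basic input-oriented CCR LP rather than the paper's SBM-VRS model with undesirable outputs, and the data are random:

    # Parallel DEA scoring: one LP per DMU, distributed across processes.
    import numpy as np
    from scipy.optimize import linprog
    from multiprocessing import Pool

    rng = np.random.default_rng(1)
    X = rng.uniform(1, 10, (50, 2))   # 50 DMUs, 2 inputs
    Y = rng.uniform(1, 10, (50, 1))   # 1 output

    def ccr_score(o):
        # min theta s.t. sum(lam*x) <= theta*x_o, sum(lam*y) >= y_o, lam >= 0
        n = len(X)
        c = np.r_[1.0, np.zeros(n)]                  # variables: [theta, lam]
        A1 = np.c_[-X[o], X.T]                       # input constraints
        A2 = np.c_[np.zeros(Y.shape[1]), -Y.T]       # output constraints
        res = linprog(c, A_ub=np.vstack([A1, A2]),
                      b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]])
        return res.fun                               # efficiency in (0, 1]

    if __name__ == "__main__":
        with Pool(4) as pool:                        # evaluate DMUs in parallel
            scores = pool.map(ccr_score, range(len(X)))
        print(np.round(scores[:5], 3))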

Journal ArticleDOI
TL;DR: This paper explores how DCM can perform better in the electronic commerce environment based on studying website behavior data and using data analytics tools, and shows that DCM performs much better when paired with the benefits of electronic commerce and Big Data than traditional SCM methods.
Abstract: With the advent of the Internet and the flourishing of connected technology, electronic commerce has become a new business model that disrupts the traditional transactional model and is transforming the consumer’s lifestyle. Electronic commerce leads to constantly changing customer needs, therefore quick action and collaboration between production and the market is essential. Meanwhile, the abundant transactional data generated by electronic commerce allows us to explore browsing behaviors, habits, preferences and even characteristics of customers, which can help companies to understand their customer’s needs more clearly. Traditional supply chain management (SCM) simply cannot keep up with electronic commerce because demand forecasts are constantly changing. Customer demands create and affect the whole supply chain. The purpose of SCM is to satisfy the customers who support the company by paying for the products; so meeting changing customer needs should be incorporated into SCM by developing demand chain management (DCM). In this paper, we explore how DCM can perform better in the electronic commerce environment based on studying website behavior data and using data analytics tools. The results show that DCM performs much better when paired with the benefits of electronic commerce and Big Data than traditional SCM methods.

Journal ArticleDOI
TL;DR: Empirical tests show that sentiments over topics together with other quantitative features can more accurately predict sales volume when compared with using quantitative features alone.
Abstract: In the era of big data, a huge number of product reviews have been posted to online social media. Accordingly, mining consumers' sentiments about products can generate valuable business intelligence for enhancing management's decision-making. The main contribution of our research is the design of a novel methodology that extracts consumers' sentiments over topics of product reviews (i.e., product aspects) to enhance sales prediction performance. In particular, consumers' daily sentiments embedded in online reviews over latent topics are extracted through the joint sentiment topic model. Finally, the sentiment distributions together with other quantitative features are applied to predict the sales volume of the following period. Based on a case study conducted in one of the largest e-commerce companies in China, our empirical tests show that sentiments over topics together with other quantitative features can more accurately predict sales volume than quantitative features alone.
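
A toy version of the final prediction step, assuming sentiment has already been aggregated to a daily score: lagged sales and lagged sentiment feed a regression for next-period sales. The numbers are invented:

    # Next-day sales predicted from lagged sales plus a daily sentiment feature.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    sales = np.array([120, 135, 128, 150, 160, 155, 170], dtype=float)
    sent = np.array([0.2, 0.5, 0.1, 0.6, 0.7, 0.4, 0.8])   # mean daily sentiment

    X = np.c_[sales[:-1], sent[:-1]]    # lagged sales + lagged sentiment
    y = sales[1:]
    model = LinearRegression().fit(X, y)
    print("coef (lagged sales, sentiment):", model.coef_.round(2))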

Journal ArticleDOI
TL;DR: This paper proposes a new DEA-based analysis framework with a regression-based feedback mechanism, where regression analysis provides DEA with feedback that informs about the relevance of the inputs and the outputs chosen by the analyst.
Abstract: Data envelopment analysis (DEA) has witnessed increasing popularity in banking studies since 1985. In this paper, we propose a new DEA-based analysis framework with a regression-based feedback mechanism, where regression analysis provides DEA with feedback that informs about the relevance of the inputs and the outputs chosen by the analyst. Unlike previous studies, the DEA models used within the proposed framework could use both inputs and outputs, only inputs, or only outputs. So far, the UK banking sector remains relatively under researched despite its crucial importance to the UK economy. We use the proposed framework to address several research questions related to both the efficiency of the UK commercial banking sector and DEA analyses with and without regression-based feedback. Empirical results suggest that, on average, the commercial banks operating in the UK—whether domestic or foreign—are yet to achieve acceptable levels of overall technical efficiency, pure technical efficiency, and scale efficiency. On the other hand, DEA analyses with and without a linear regression-based feedback mechanism seem to provide consistent findings; however, in general DEA analyses without feedback tend to over- or under-estimate efficiency scores depending on the orientation of the analyses. Furthermore, in general, a linear regression-based feedback mechanism proves effective at improving discrimination in DEA analyses unless the initial choice of inputs and outputs is well informed.

Journal ArticleDOI
TL;DR: This analysis shows that the combined application of the EU-ETS at the manufacturers’ tier and the carbon tax on truck transport implies additional costs for producers that reduce their good provisions, which has a positive outcome for the environment.
Abstract: Global climate change has encouraged international and regional adoption of pollution taxes and carbon emission reduction policies. Europe has taken the leadership in environmental regulations by introducing the European Union Emissions Trading System (EU-ETS) in 2005 and by promoting a set of policies destined to lower carbon emissions from the energy, industrial, and transport sectors. These environmental policies have significantly affected the production choices of these European sectors. Considering this framework, the objective of this paper is to evaluate the effects of the application of environmental policies in a multitiered closed-loop supply chain (CLSC) network where raw material suppliers, manufacturers, consumers, and recovery centers operate. In particular, we assume that manufacturers are subject to the EU-ETS and that a carbon tax is imposed on truck transport. In this way, the developed model captures carbon emission regulations, recycling, transportation and technological factors within a unified framework. In particular, it allows for evaluating the impacts of the considered environmental regulations on carbon emissions, product flows, and prices. The proposed model is optimized and solved by using the theory of variational inequalities. Our analysis shows that the combined application of the EU-ETS at the manufacturers' tier and the carbon tax on truck transport implies additional costs for producers that reduce their good provisions. On the other hand, this has a positive outcome for the environment, since CO2 emissions are reduced. Moreover, an increase in the efficiency level of the recycling process increases the availability of reusable raw material in the reverse supply chain. Finally, the distance between a pair of CLSC tiers plays a very important role: the lower the distance covered by vehicles, the higher the production of goods and the lower the amount of CO2 emitted.
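
Variational inequality models of this kind are often solved by projection methods. A minimal sketch on a made-up monotone affine problem with nonnegativity constraints (find x ≥ 0 such that F(x)ᵀ(y − x) ≥ 0 for all y ≥ 0):

    # Fixed-step projection method for a toy monotone variational inequality.
    import numpy as np

    M = np.array([[4.0, 1.0], [1.0, 3.0]])       # positive definite => monotone F
    q = np.array([-8.0, -6.0])
    F = lambda x: M @ x + q

    x, gamma = np.zeros(2), 0.1                  # small fixed step; converges for
    for _ in range(500):                         # strongly monotone, Lipschitz F
        x = np.maximum(x - gamma * F(x), 0.0)    # project onto nonnegative orthant
    print("VI solution ~", x.round(4))           # interior here, so F(x) ~ 0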

Journal ArticleDOI
TL;DR: The model is solved analytically, and the results indicate that the optimal order size and sample size are intrinsically linked and together maximize the total profit.
Abstract: To ensure that all products are perfect, inspection is essential, even though it is not always possible to inspect every product after production, as with some special types of products, such as plastic joints for water pipes. In this direction, this paper develops an inventory model with a lot-inspection policy. With lot inspection, not all products need to be verified, yet the retailer can still judge the quality of the products during inspection. If the retailer finds products of imperfect quality, the products are sent back to the supplier. Because inspection is at the lot level, misclassification errors (Type-I and Type-II errors) are introduced to model the problem. Two possible cases for returning products are discussed: in the first, defective lots are immediately withdrawn from the system and sent back to the supplier at the retailer's expense; in the second, the retailer returns defective products upon receiving the next lot from the supplier, at the supplier's expense, as in the food industry or the hygiene product industry. The model is solved analytically, and the results indicate that the optimal order size and sample size are intrinsically linked and together maximize the total profit. Numerical examples, graphical representations, and a sensitivity analysis are given to illustrate the model. The results suggest that returning defective products under the first case is more profitable than under the second case.
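
The effect of the Type-I/Type-II inspection errors can be sketched with a standard acceptance-sampling calculation: misclassification shifts the probability that a sampled item is flagged, and hence the probability that a lot is accepted. The parameters are illustrative:

    # Lot acceptance probability with and without inspection errors.
    from scipy.stats import binom

    p, n, c = 0.04, 50, 2            # true defect rate, sample size, acceptance number
    alpha, beta = 0.02, 0.10         # Type-I (good flagged) and Type-II (bad passed)

    p_obs = p * (1 - beta) + (1 - p) * alpha   # chance an inspected item is flagged
    print("P(accept | error-free) =", round(binom.cdf(c, n, p), 3))
    print("P(accept | with errors) =", round(binom.cdf(c, n, p_obs), 3))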

Journal ArticleDOI
TL;DR: A novel BDA is proposed to contribute towards this void, using a fuzzy cognitive map (FCM) approach that will enhance decision-making and thus prioritise IT service procurement in the public sector.
Abstract: The prevalence of big data is starting to spread across the public and private sectors; however, an impediment to its widespread adoption centres on a lack of appropriate big data analytics (BDA) and the skills needed to exploit the full potential of big data availability. In this paper, we propose a novel BDA to contribute towards this void, using a fuzzy cognitive map (FCM) approach that will enhance decision-making and thus prioritise IT service procurement in the public sector. This is achieved through the development of decision models that capture the strengths of both data analytics and the established intuitive qualitative approach. By taking advantage of both data analytics and FCM, the proposed approach captures the strength of data-driven decision-making and intuitive model-driven decision modelling. The approach is then validated through a decision-making case regarding IT service procurement in the public sector, which is a fundamental step in supplying IT infrastructure for the public in a regional government in the Russian Federation. The analysis result for the given decision-making problem is then evaluated by decision makers and e-government experts to confirm the applicability of the proposed BDA. In doing so, we demonstrate the value of this approach in contributing towards robust public decision-making regarding IT service procurement.
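
A minimal fuzzy-cognitive-map inference loop, assuming a made-up three-concept procurement map (budget, vendor capability, procurement priority): concept activations are repeatedly pushed through a weighted influence matrix and a sigmoid until they stabilise:

    # FCM inference with a modified Kosko update rule (with self-memory).
    import numpy as np

    W = np.array([[0.0, 0.0, 0.6],     # W[i, j]: influence of concept i on j
                  [0.0, 0.0, 0.7],
                  [0.0, 0.0, 0.0]])
    x = np.array([0.8, 0.5, 0.0])      # initial activations

    sigmoid = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(20):                 # iterate towards a fixed point
        x = sigmoid(W.T @ x + x)        # incoming influence + own previous state
    print(x.round(3))                   # steady-state activation levels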