
Showing papers in "Operations Research and Decisions in 2017"


Posted ContentDOI
TL;DR: The properties of the proposed EWMA control chart, including the average run lengths, are given for in-control and out-of-control processes.
Abstract: In this manuscript, a new EWMA control chart is proposed under repetitive sampling when the quantitative quality characteristic follows the exponential distribution. The properties of the proposed chart, including the average run lengths, are given for in-control and out-of-control processes. The performance of the proposed chart is compared with two existing control charts with the help of simulated data. An application of the proposed chart is illustrated using a healthcare data set.
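A minimal simulation sketch of how the average run length (ARL) of such a chart can be evaluated, not the authors' exact design: the repetitive-sampling rule of the paper is omitted, and the smoothing constant, limit width and shift size are illustrative assumptions.

```python
import numpy as np

def simulate_arl(shift=1.0, weight=0.2, L=2.7, n_runs=1000, max_obs=5000, seed=1):
    """Estimate the ARL of a two-sided EWMA chart for Exp(mean=1) data."""
    rng = np.random.default_rng(seed)
    mu0 = 1.0                                   # in-control mean (= std dev)
    sigma_z = mu0 * np.sqrt(weight / (2.0 - weight))
    ucl, lcl = mu0 + L * sigma_z, max(mu0 - L * sigma_z, 0.0)
    run_lengths = []
    for _ in range(n_runs):
        z = mu0
        for t in range(1, max_obs + 1):
            x = rng.exponential(mu0 * shift)    # shift = 1 means in control
            z = weight * x + (1.0 - weight) * z
            if z > ucl or z < lcl:
                run_lengths.append(t)
                break
        else:                                   # censor runs with no signal
            run_lengths.append(max_obs)
    return float(np.mean(run_lengths))

print("in-control ARL  ~", simulate_arl(shift=1.0))
print("out-of-control ARL ~", simulate_arl(shift=1.5))
```

As expected for any reasonable design, the estimated out-of-control ARL should fall well below the in-control value.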

13 citations


Posted ContentDOI
TL;DR: In this article, the problem of designing a single-product supply chain network in an agile manufacturing setting under a vendor-managed inventory (VMI) strategy to seize a new market opportunity is addressed.
Abstract: This paper introduces the problem of designing a single-product supply chain network in an agile manufacturing setting under a Vendor Managed Inventory (VMI) strategy to seize a new market opportunity. The problem addresses the level of risk aversion of the retailer when dealing with the uncertainty of market related information through a Conditional Value at Risk (CVaR) approach. This approach leads to a bi-level programming problem. The Karush-Kuhn-Tucker (KKT) conditions are employed to transform the model into a single-level, mixed-integer linear programming problem by considering some relaxations. Since realizations of imprecisely known parameters are the only information available, a data-driven approach is employed as a suitable, more practical, methodology of avoiding distributional assumptions. Finally, the effectiveness of the proposed model is demonstrated through a numerical example.
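The data-driven CVaR ingredient of the model can be illustrated with a short sketch: with only historical loss realizations available, the retailer's conditional value at risk is estimated directly from the scenarios, without any distributional assumption. The scenario losses and confidence level below are made up.

```python
import numpy as np

def empirical_cvar(losses, beta=0.9):
    """CVaR_beta: mean loss over the worst (1-beta) tail of the scenarios."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, beta)           # value at risk (beta-quantile)
    tail = losses[losses >= var]
    return var, tail.mean()

# hypothetical per-scenario losses of the retailer (e.g. unmet-demand cost)
scenario_losses = [12.0, 15.5, 9.8, 30.2, 22.1, 11.4, 45.0, 18.3]
var, cvar = empirical_cvar(scenario_losses, beta=0.75)
print(f"VaR_0.75 = {var:.2f}, CVaR_0.75 = {cvar:.2f}")
```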

13 citations


Posted ContentDOI
TL;DR: This paper uses (linear) membership functions to fuzzily describe objective functions, as well as the controlling factors, and generates satisfactory solutions to a multi-level supplier selection problem with uncertain or fuzzy demand and supply.
Abstract: Supplier selection plays a vital role in evolving an effective supply chain and the overall performance of organisations. Choosing suppliers may involve different levels arranged in a hierarchical structure. Decisions are made successively starting from the first level to the last level. Decision variables are partitioned between different levels and are called controlling factors. In this paper, we propose a multi-level supplier selection problem with uncertain or fuzzy demand and supply. Since objectives may be conflicting in nature, possible relaxations in the form of tolerances are provided by the upper level decision makers to avoid decision deadlocks. We use (linear) membership functions to fuzzily describe objective functions, as well as the controlling factors, and generate satisfactory solutions. We extend and present an approach to solving multi-level decision making problems when fuzzy constraints are employed. Different scenarios are constructed within a numerical illustration, based on the selection of controlling factors by the upper level decision makers.
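A hedged sketch of the linear membership functions referred to above, for a minimization objective with an aspiration level and tolerance, and for a controlling factor relaxed by a tolerance around the upper level's choice. All numerical values are illustrative; a max-min formulation would then maximize the smallest membership degree.

```python
def mu_objective(z, z_low, z_high):
    """Degree of satisfaction of a minimization objective (1 = fully met)."""
    if z <= z_low:
        return 1.0
    if z >= z_high:
        return 0.0
    return (z_high - z) / (z_high - z_low)

def mu_control(x, x_upper, tol):
    """Degree to which a controlling factor honours the upper level's choice."""
    dev = abs(x - x_upper)
    return max(0.0, 1.0 - dev / tol) if tol > 0 else float(dev == 0)

# illustrative evaluation of one candidate solution
print(mu_objective(z=120.0, z_low=100.0, z_high=150.0))   # 0.6
print(mu_control(x=48.0, x_upper=50.0, tol=5.0))          # 0.6
```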

11 citations


Posted ContentDOI
TL;DR: In this paper, the best models of bankruptcy prediction that can indicate the deteriorating situation of a company several years before bankruptcy occurs were selected. But only 5 models were characterized by sufficient predictive ability in the five years before the bankruptcy of enterprises.
Abstract: The objective of this paper is to select the best models of bankruptcy prediction that can indicate the deteriorating situation of a company several years before bankruptcy occurs. There are a lot of methods for evaluating the financial statements of enterprises, but only a few can assess a company as a whole and recognise sufficiently early the deteriorating financial standing of a business. The matrix method was used to classify companies in order to assess the models. The correctness of the classification made by the models was tested based on data covering a period of five years before the bankruptcy of enterprises. To analyse the effectiveness of these discriminant models, the financial reports of manufacturing companies were used. Analysis of 33 models of bankruptcy prediction shows that only 5 models were characterized by sufficient predictive ability in the five years before the bankruptcy of enterprises. The results obtained show that so far a unique, accurate, optimal model, by which companies could be assessed with very high efficiency, has not been identified. That is why it is vital to continue research related to the construction of models enabling accurate evaluation of the financial condition of businesses.

10 citations


Posted ContentDOI
TL;DR: In this paper, the authors assess and analyse selected liquidity/illiquidity measures derived from high-frequency intraday data from the Warsaw Stock Exchange (WSE) and provide an analysis of the obtained results with respect to the whole sample and three consecutive sub-samples, each of equal size.
Abstract: The aim of this study is to assess and analyse selected liquidity/illiquidity measures derived from high-frequency intraday data from the Warsaw Stock Exchange (WSE). As the side initiating a trade cannot be directly identified from a raw data set, firstly the Lee and Ready [1991] algorithm for inferring the initiator of a trade is employed to distinguish between so-called buyer- and seller-initiated trades. Intraday data for fifty-three WSE-listed companies divided into three size groups cover the period from January 3, 2005 to June 30, 2015. Moreover, the paper provides an analysis of the robustness of the obtained results with respect to the whole sample and three consecutive sub-samples, each of equal size: covering the pre-crisis, crisis, and post-crisis periods. The empirical results turn out to be robust to the choice of the period. Furthermore, hypotheses concerning the statistical significance of coefficients of correlation between the daily values of three liquidity proxies used in the study are tested.
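For readers unfamiliar with the Lee and Ready (1991) rule, a minimal sketch of the classification logic is given below: trades above the prevailing quote midpoint are labelled buyer-initiated, trades below it seller-initiated, and midpoint trades are resolved by the tick test (direction of the last price change). The price and midpoint series are made up.

```python
def classify_trades(prices, midpoints):
    labels, last_move = [], 0
    prev_prices = [None] + list(prices[:-1])
    for prev, price, mid in zip(prev_prices, prices, midpoints):
        if prev is not None and price != prev:
            last_move = 1 if price > prev else -1   # tick direction
        if price > mid:
            labels.append("buy")
        elif price < mid:
            labels.append("sell")
        else:                                       # midpoint trade -> tick test
            labels.append("buy" if last_move > 0 else "sell")
    return labels

prices    = [100.0, 100.5, 100.5, 99.8, 100.1]
midpoints = [100.1, 100.3, 100.5, 100.0, 100.1]
print(classify_trades(prices, midpoints))
```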

8 citations


Posted ContentDOI
TL;DR: In this article, the incremental benefit of product switch options in steel plant projects is valued through Monte Carlo simulation and modeling the prices and demand of steel products as a Geometric Brownian Motion (GBM).
Abstract: In the steel industry, which is subject to significant volatility in its output prices and in market demand for different product ranges, production diversification can generate important value through switch real options. Therefore, investments in different assets are commonly made, generating the possibility of production diversification and valuable switch options. This article values the incremental benefit of product switch options in steel plant projects. These options are valued through Monte Carlo simulation, modelling the prices and demand of steel products as geometric Brownian motion (GBM). The results show that this option can generate a significant increase in the NPV of metallurgical projects.
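A minimal valuation sketch in the spirit of the abstract, with illustrative parameters rather than the paper's case data: two product prices follow correlated GBMs, the flexible plant produces whichever product currently yields the higher margin, and the switch option value is the simulated gain over a plant committed to a single product.

```python
import numpy as np

def switch_option_value(p0=(100.0, 95.0), mu=(0.03, 0.03), sigma=(0.25, 0.30),
                        rho=0.4, cost=(70.0, 70.0), years=10, dt=1.0,
                        rate=0.08, n_paths=20000, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = int(years / dt)
    chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]) * dt)
    prices = np.tile(p0, (n_paths, 1))
    pv_flex, pv_fixed = np.zeros(n_paths), np.zeros(n_paths)
    for step in range(1, n_steps + 1):
        z = rng.standard_normal((n_paths, 2)) @ chol.T      # correlated shocks
        drift = (np.array(mu) - 0.5 * np.array(sigma) ** 2) * dt
        prices *= np.exp(drift + np.array(sigma) * z)        # GBM update
        margins = prices - np.array(cost)
        disc = np.exp(-rate * step * dt)
        pv_flex += disc * np.maximum(margins[:, 0], margins[:, 1])  # can switch
        pv_fixed += disc * margins[:, 0]                            # product A only
    return (pv_flex - pv_fixed).mean()

print("incremental value of the switch option ~", round(switch_option_value(), 2))
```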

7 citations


Posted ContentDOI
TL;DR: Numerical experimentation shows that prior knowledge of the critical value of the substitution rate helps to minimize the total inventory cost in an inventory system of two mutually substitutable items.
Abstract: In this paper, we study an inventory system of two mutually substitutable items where when an item is out of stock, demand for it is met by the other item and any part of demand not met due to unavailability of the other item is lost. In the event of substitution, there is an additional cost of substitution involved for each unit of the substituted item. The demands are assumed to be deterministic and constant. Items are ordered jointly in each ordering cycle, in order to take advantage of joint replenishment. The problem is formulated and a solution procedure is suggested to determine the optimal ordering quantities that minimize the total inventory cost. The critical value of the substitution rate is defined to help in deciding the optimal value of decision parameters. Extensive numerical experimentation is carried out, which shows that prior knowledge of the critical value of the substitution rate helps to minimize the total inventory cost. Sensitivity analysis is carried out for the improvement in the optimal total cost with substitution as compared to the case without substitution to draw insights into the behaviour of the model.
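The joint-replenishment baseline underlying such a model can be sketched as follows; the substitution mechanism itself (item 2 demand served by item 1 at an extra per-unit cost when item 2 runs out) is not reproduced here, and all parameter values are made up.

```python
import math

D1, D2 = 400.0, 300.0      # annual demands of the two items
A = 150.0                  # joint (common) ordering cost per cycle
h1, h2 = 2.0, 2.5          # holding costs per unit per year

# classic joint-replenishment EOQ: TC(T) = A/T + T*(h1*D1 + h2*D2)/2
T_star = math.sqrt(2.0 * A / (h1 * D1 + h2 * D2))   # optimal common cycle (years)
q1, q2 = D1 * T_star, D2 * T_star                   # order quantities per cycle
total_cost = math.sqrt(2.0 * A * (h1 * D1 + h2 * D2))

print(f"cycle ~ {T_star:.3f} yr, q1 ~ {q1:.0f}, q2 ~ {q2:.0f}, "
      f"cost ~ {total_cost:.1f} per yr")
```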

7 citations


Posted ContentDOI
TL;DR: Three techniques for selecting the right project at this specific firm are compared, namely cost of poor quality (COPQ), conditional probability and fuzzy TOPSIS; the study should prove instructive for the realization of QIPs in similar types of industry.
Abstract: Continuous improvement is the core of any successful firm. Talking about manufacturing industries, there is huge potential for continuous improvement to be made in different work areas. Such improvement can be made in any section of industry in any form, such as quality improvement, waste minimization, system improvement, layout improvement, ergonomics, cost savings, etc. This case study considers an example of a manufacturing firm which wanted to start a quality improvement project (QIP) on its premises. Various products were available, but with dwindling quality levels. However, the real task was the choice of a product for upcoming QIP, as it is well known that success heavily depends upon the selection of a particular project. This is also because of the amount of effort in terms of time, money and manpower that is put into a project nowadays. In this paper, the authors’ objective was to compare three techniques: namely, cost of poor quality (COPQ), Conditional Probability and Fuzzy TOPSIS for selecting the right project based on this specific firm. The pros and cons of these approaches are also discussed. This study should prove to be instructive for the realization of QIPs in similar types of industry.

6 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new mixed attribute control chart adapted to a truncated life test, where the test duration is specified as a fraction of the mean lifespan, and the efficiency of the proposed chart is compared with an existing control chart in terms of the average run length.
Abstract: In this manuscript, the design of a new mixed attribute control chart adapted to a truncated life test is presented. It is assumed that the lifetime of a product follows the Weibull distribution and the number of failures is observed using a truncated life test, where the test duration is specified as a fraction of the mean lifespan. The proposed control chart consists of two pairs of control limits based on a binomial distribution and one lower bound. The average run length of the proposed chart is determined for various levels of shift constants and specified parameters. The efficiency of the proposed chart is compared with an existing control chart in terms of the average run length. The application of the proposed chart is discussed with the aid of a simulation study.
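A minimal sketch of the building blocks described above, with illustrative parameters rather than the paper's design constants: under a Weibull lifetime with shape beta and mean mu, a truncated life test of duration a*mu turns each tested item into a Bernoulli trial with failure probability p, and limits for the number of failures among n items follow from the binomial distribution (here via a simple normal approximation).

```python
import math

def failure_probability(mu, beta, a):
    """P(item fails before the truncation time t0 = a * mu) under Weibull life."""
    scale = mu / math.gamma(1.0 + 1.0 / beta)       # Weibull scale parameter
    t0 = a * mu
    return 1.0 - math.exp(-((t0 / scale) ** beta))

def binomial_limits(n, p, L=3.0):
    """Approximate control limits for the number of failures in n tested items."""
    center = n * p
    half_width = L * math.sqrt(n * p * (1.0 - p))
    return max(0.0, center - half_width), center + half_width

p0 = failure_probability(mu=1000.0, beta=2.0, a=0.5)     # in-control probability
lcl, ucl = binomial_limits(n=50, p=p0)
print(f"p0 ~ {p0:.3f}, LCL ~ {lcl:.2f}, UCL ~ {ucl:.2f}")
```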

6 citations


Posted ContentDOI
TL;DR: In this paper, the authors propose a praxeological approach to improve a forecasting process through the employment of Forecast Value Added (FVA) analysis, which may be interpreted as a manifestation of lean management in forecasting.
Abstract: The goal of this paper is to propose a praxeological approach, in order to improve a forecasting process through the employment of Forecast Value Added (FVA) analysis. This may be interpreted as a manifestation of lean management in forecasting. The author discusses the concepts of the effectiveness and efficiency of forecasting. The former, defined in the praxeology as the degree to which goals are achieved, refers to the accuracy of forecasts. The latter reflects the relation between the benefits accruing from the results of forecasting and the costs incurred in this process. Since measuring the benefits accruing from a forecasting is very difficult, a simplification according to which this benefit is a function of the forecast accuracy is proposed. This enables evaluating the efficiency of the forecasting process. Since improving this process may consist of either reducing forecast error or decreasing costs, FVA analysis, which expresses the concept of lean management, may be applied to reduce the waste accompanying forecasting.
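A minimal FVA-style check in the spirit of the paper, on a made-up series: the forecasting process adds value only if its error is lower than that of a naive benchmark (here, the previous period's actual value).

```python
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast) / actual) * 100.0

actuals   = [102, 98, 110, 107, 115, 120, 118]
process_f = [100, 101, 105, 109, 112, 118, 121]   # forecasts from the process
naive_f   = [100, 102, 98, 110, 107, 115, 120]    # naive: last observed value

fva = mape(actuals, naive_f) - mape(actuals, process_f)   # positive = value added
print(f"naive MAPE {mape(actuals, naive_f):.1f}%, "
      f"process MAPE {mape(actuals, process_f):.1f}%, FVA {fva:+.1f} pp")
```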

4 citations


Posted ContentDOI
TL;DR: The paper develops a statistical procedure producing comparisons with low probabilities of errors and a heuristic algorithm based on such comparisons; the proposed approach guarantees the applicability of such estimators for any size of set.
Abstract: This paper presents two approaches to determining estimates of an equivalence relation on the basis of pairwise comparisons with random errors. Obtaining such an estimate requires the solution of a discrete programming problem which minimizes the sum of the differences between the form of the relation and the comparisons. The problem is NP hard and can be solved with the use of exact algorithms for sets of moderate size, i.e. about 50 elements. In the case of larger sets, i.e. at least 200 comparisons for each element, it is necessary to apply heuristic algorithms. The paper presents results (a statistical preprocessing), which enable us to determine the optimal or a near-optimal solution with acceptable computational cost. They include: the development of a statistical procedure producing comparisons with low probabilities of errors and a heuristic algorithm based on such comparisons. The proposed approach guarantees the applicability of such estimators for any size of set.

Posted ContentDOI
TL;DR: This paper proposes an alternative definition of the weak disposability of outputs in additive form; an axiomatic foundation is introduced to construct a new production technology space in the presence of undesirable outputs.
Abstract: In order to deal with undesirable products in models of performance analysis, we need to replace the assumption of free disposability by weak disposability and this assumption has been used to model undesirable products as outputs. The traditional axiom of weak disposability of Shephard (1970) is given in multiplier form and, in this sense, the level of bad outputs is equal to zero if and only if the level of the desirable outputs is equal to zero. In this paper, we will propose an alternative definition of the weak disposability of outputs in additive form. An axiomatic foundation is introduced to construct a new production technology space in the presence of undesirable outputs. The model is illustrated using real data from 92 coal fired power plants.

Posted ContentDOI
TL;DR: The paper develops a procedure to solve a UCCMOGP problem using an MOGP technique based on a weighted-sum method and shows that the corresponding uncertain chance-constrained multi-objective geometric programming problems can be transformed into conventional MOGP problems to calculate the objective values.
Abstract: Multi-objective geometric programming (MOGP) is a powerful optimization technique widely used for solving a variety of nonlinear optimization problems and engineering problems. Generally, the parameters of multi-objective geometric programming (MOGP) models are assumed to be deterministic and fixed. However, the values observed for the parameters in real-world MOGP problems are often imprecise and subject to fluctuations. Therefore, we use MOGP within an uncertainty-based framework and propose a MOGP model whose coefficients are uncertain in nature. We assume the uncertain variables (UVs) to have linear, normal or zigzag uncertainty distributions and show that the corresponding uncertain chance-constrained multi-objective geometric programming (UCCMOGP) problems can be transformed into conventional MOGP problems to calculate the objective values. The paper develops a procedure to solve a UCCMOGP problem using an MOGP technique based on a weighted-sum method. The efficacy of this procedure is demonstrated by some numerical examples.

Posted ContentDOI
TL;DR: A new approach based on discrete time Markov Decision Processes (MDPs) is proposed that associates the modelling power of PNs with the planning power of MDPs, and simulation results illustrate the benefit of the method from the computational point of view.
Abstract: This paper considers design of control sequences for discrete event systems (DESs) modelled by untimed Petri nets (PNs). PNs are well-known mathematical and graphical models that are widely used to describe distributed DESs, including choices, synchronizations and parallelisms. The domains of application include, but are not restricted to, manufacturing systems, computer science and transportation networks. We are motivated by the observation that such systems need to plan their production or services. The paper is more particularly concerned with control issues in uncertain environments when unexpected events occur or when control errors disturb the behaviour of the system. To deal with such uncertainties, a new approach based on discrete time Markov Decision Processes (MDPs) is proposed that associates the modelling power of PNs with the planning power of MDPs. Finally, the simulation results illustrate the benefit of our method from the computational point of view.

Posted ContentDOI
TL;DR: This paper investigates the antifragility level of an organization; the Euclidean distance between the aggregate Fuzzy Antifragility Index (FAI) and each linguistic term used in the case study is calculated.
Abstract: The antifragility concept has received considerable attention from researchers in recent years. Contrary to fragile systems, which fail when exposed to stressors, antifragile systems prosper and get better in response to unpredictability, volatility, randomness, chaos and disturbance. The implications of antifragility go beyond resilience or robustness: a resilient system resists stresses and remains the same, while an antifragile system improves. Taleb argues that antifragility is required for dealing with events he calls black swans or X-events, which are rare, unpredictable and extreme. Such events come as a surprise and have major consequences. Antifragility was developed by Taleb in a socioeconomic context, not in industrial production, but the authors believe that the concept may find its largest practical use in industrial environments, which motivates the present work. In this paper, we aim to assess the antifragility level of an organization. To do so, the authors carried out a case study of the Iranian Security Paper Manufacturing Complex (TAKAB). First, a questionnaire was designed according to 7 antifragility criteria using a five-point Likert scale, and a triangular fuzzy number was assigned to each linguistic term. Next, the weight of each criterion was obtained using the entropy technique. In the final stage, the Euclidean distance between the aggregate Fuzzy Antifragility Index (FAI) and each linguistic term used in the case study was calculated. Based on the results, the antifragility level of the organization was assessed as "satisfactorily antifragile", this term giving the minimum Euclidean distance.
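A minimal sketch of the mechanics described above, on made-up questionnaire data: entropy weights for the criteria, an aggregate triangular fuzzy index, and the vertex-method Euclidean distance to each linguistic term. The fuzzy scale and linguistic labels are illustrative assumptions, not the study's.

```python
import numpy as np

# made-up data: rows = respondents, cols = 7 antifragility criteria (Likert 1..5)
scores = np.array([[4, 3, 5, 4, 3, 4, 5],
                   [3, 3, 4, 4, 2, 4, 4],
                   [4, 4, 5, 3, 3, 5, 4]], dtype=float)

# entropy weights for the 7 criteria
p = scores / scores.sum(axis=0)
entropy = -(p * np.log(p)).sum(axis=0) / np.log(scores.shape[0])
weights = (1.0 - entropy) / (1.0 - entropy).sum()

# triangular fuzzy numbers (l, m, u) for the five Likert terms (made-up scale)
tfn = {1: (0.0, 0.0, 2.5), 2: (0.0, 2.5, 5.0), 3: (2.5, 5.0, 7.5),
       4: (5.0, 7.5, 10.0), 5: (7.5, 10.0, 10.0)}
linguistic = {"fragile": tfn[1], "slightly antifragile": tfn[2],
              "moderately antifragile": tfn[3],
              "satisfactorily antifragile": tfn[4], "fully antifragile": tfn[5]}

# aggregate FAI: average respondents' TFNs per criterion, then weight the criteria
resp_tfn = np.array([[tfn[int(v)] for v in row] for row in scores])  # (resp, crit, 3)
crit_tfn = resp_tfn.mean(axis=0)                                     # (crit, 3)
fai = (weights[:, None] * crit_tfn).sum(axis=0)                      # aggregate TFN

# vertex-method Euclidean distance to each linguistic term
dist = {name: np.sqrt(((fai - np.array(t)) ** 2).mean()) for name, t in linguistic.items()}
print("FAI ~", fai.round(2), "| closest term:", min(dist, key=dist.get))
```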

Posted ContentDOI
TL;DR: This work proposes to apply Markovian queuing systems, specifically a model of a multi-channel queuing system with Poisson input flow and denial-of-service (breakdown) to solve the problem of determining the expected time before service is resumed after a failure.
Abstract: Cloud technologies are a very considerable area that influences IT infrastructure, network services and applications. Research has highlighted difficulties in the functioning of cloud infrastructure. For instance, if a server is subjected to malicious attacks or a force majeure causes a failure in the cloud’s service, it is required to determine the time that it takes the system to return to being fully functional after the crash. This will determine the technological and financial risks faced by the owner and end users of cloud services. Therefore, to solve the problem of determining the expected time before service is resumed after a failure, we propose to apply Markovian queuing systems, specifically a model of a multi-channel queuing system with Poisson input flow and denial-of-service (breakdown).
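A minimal sketch of the kind of multi-channel Markovian (M/M/c) calculation such an analysis relies on: the Erlang C probability that an arriving request must wait and the mean waiting time. The arrival and service rates below are illustrative, not taken from the paper, and the breakdown mechanism itself is not modelled here.

```python
import math

def mmc_metrics(lam, mu, c):
    """Return (P_wait, Wq) for an M/M/c queue with arrival rate lam,
    service rate mu per channel and c channels (requires lam < c*mu)."""
    rho = lam / (c * mu)
    a = lam / mu                                 # offered load in Erlangs
    norm = sum(a**k / math.factorial(k) for k in range(c))
    erlang_c = (a**c / math.factorial(c)) / (1.0 - rho)
    p_wait = erlang_c / (norm + erlang_c)        # Erlang C formula
    wq = p_wait / (c * mu - lam)                 # mean time spent waiting
    return p_wait, wq

p_wait, wq = mmc_metrics(lam=8.0, mu=3.0, c=4)   # e.g. 8 requests/min, 4 channels
print(f"P(wait) = {p_wait:.3f}, mean waiting time = {wq:.3f} min")
```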

Posted ContentDOI
TL;DR: In this article, the authors focus on the key dimensions of decision-making and on the role and participation of the invariants of nature, logic, and the conceptual systems of science and management.
Abstract: The article is devoted to the key dimensions of decision-making. The authors' main goal was to point out the role, extremely important for decision-making in management, played by the invariants of nature, logic and the conceptual systems of science and management. Most of these dimensions are associated with conditions of uncertainty, and thus create the risk that a decision will not be realized, or will be realized with negative side effects, which means failure. In the course of this work, the research hypothesis that the complexity of the categories of decision and management is determined by the varieties of reality (Nature) was verified. As a consequence of this hypothesis, there is currently no uniform methodology of decision-making in science, just as science itself is not methodologically uniform. One can even doubt whether such a methodology can be created, which is suggested, in the sense discussed in the article, by the essential dimensions of the decisions undertaken by Man. These problems are not new to science, since they have been analysed by many scientists in the past, but the present approach places the process of decision-making within a wider spectrum of associated factors than before, and in this sense it is novel; it can be described as interdisciplinary. Relying on selected fields of knowledge that are generally relevant to the issue discussed, the authors present the complexity and diversity of the concepts making up the systems of decision-making and management, associated especially with ontology and epistemology. The text therefore refers broadly to the role of the basic areas of human knowledge, and the overlapping relationships between them, in getting to know reality. This applies to the so-called circle of the sciences examined by the psychologist J. Piaget. It is also emphasized in the article that logic is the basis of all thinking. Another area of the research was the role of logic and game-theoretic concepts in illustrating so-called good and bad decisions, that is, in the optimization of decision-making processes. An additional aim of the authors was to create a text containing a piece of contemporary human knowledge about the surrounding reality. In fact, the point is to understand reality, that is, to remain in relative equilibrium with it. This state of equilibrium is ensured by our knowledge, which can be well explained by logic, psychology and mathematics, which, in turn, express that knowledge in a strict way.

Posted ContentDOI
TL;DR: In this paper, a modification of Fourier analysis was applied to the estimation of cycle amplitudes and frequencies, which allowed for more precise estimation of the cycle characteristics than the traditional approach, and compared the international structure of Polish trade with EU members with the cross-spectral characteristics of GDP series.
Abstract: This paper examines the properties of business cycles in Poland and its major trading partners. The aim of the article is to study the business cycle synchronization (BCS) between Poland and other countries, and to assess the impact of international trade on BCS. The author applies a modification of Fourier analysis to the estimation of cycle amplitudes and frequencies. This allows for more precise estimation of the cycle characteristics than the traditional approach. Cross-spectral analysis of the cyclical components of GDP for Poland and its major trading partners enables us to study the relationships between business cycles in these countries. Comparing the international structure of Polish trade with EU members with the cross-spectral characteristics of GDP series allows us to investigate the links between international trade and business cycle synchronization.
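A minimal sketch of the spectral idea, on synthetic data rather than Polish GDP: the dominant cycle frequency and amplitude of a detrended series are read off its discrete Fourier transform; cross-spectral statistics such as coherence would be computed analogously from pairs of series.

```python
import numpy as np

quarters = np.arange(80)                                  # 20 years, quarterly
cycle = 1.5 * np.sin(2 * np.pi * quarters / 24 + 0.3)     # ~6-year cycle
gdp_gap = cycle + 0.4 * np.random.default_rng(7).standard_normal(80)

spectrum = np.fft.rfft(gdp_gap - gdp_gap.mean())
freqs = np.fft.rfftfreq(len(gdp_gap), d=1.0)              # cycles per quarter
k = np.argmax(np.abs(spectrum[1:])) + 1                   # skip zero frequency
period_years = (1.0 / freqs[k]) / 4.0
amplitude = 2.0 * np.abs(spectrum[k]) / len(gdp_gap)      # real-sinusoid estimate

print(f"dominant cycle ~ {period_years:.1f} years, amplitude ~ {amplitude:.2f}")
```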

Posted ContentDOI
TL;DR: The functionality of the developed prototype, in terms of the proposed approach to the multi-criteria analysis and assessment of objects, is illustrated by a practical example of the assessment of employees and the analysis of remuneration.
Abstract: The article presents a multi-methodical approach to the multi-criteria analysis and assessment of objects (rankings, grouping, econometric assessments). This issue forms part of research and engineering work associated with the construction and application of a computerized decision support system. The functionality of the developed prototype, in terms of the proposed approach, is illustrated by a practical example of the assessment of employees and the analysis of remuneration.

Posted ContentDOI
TL;DR: This work characterizes the optimal allocations and develops two exact algorithms for their search, based on the tight relationship between two geometric objects of fair division: the Individual Pieces Set (IPS) and the Radon-Nikodym Set (RNS).
Abstract: We consider the division of a finite number of homogeneous divisible items among three players. Under the assumption that each player assigns a positive value to every item, we develop a simple algorithm for finding such a division. This is based on the tight relationship between two geometric objects of fair division: the Individual Pieces Set ($IPS$) and the Radon-Nikodym Set ($RNS$).

Journal ArticleDOI
TL;DR: The authors prove a PLT for the queue length (customers) in a multiserver open queueing network under heavy traffic conditions.
Abstract: Proofs of probability limit theorems (PLTs) have clear practical implications. In the article, the authors prove a PLT for the queue length (customers) in a multiserver open queueing network under heavy traffic conditions. Models of queueing networks have been extensively used for analysing the performance of manufacturing systems and transportation systems, as well as computer and communication networks. Therefore, many methods of approximation have emerged and PLT is among them. The history of investigations into diffusion approximations for queueing systems in heavy traffic is about forty years old, while the history of queueing networks is about twenty years old. Although Kolmogorov [29] had already proved in the fifties that it was possible to approximate the number of occupied phases in a queue with finite capacity by means of a diffusion process with reflection at the upper boundary, systematic study of the problem only started with the papers [26, 27, 35]. Similarly, methods of investigating single-phase (single-server) queueing systems in heavy traffic are considered in [35, 21, 22, 3, 2], etc. Later on, a large number of papers were published aimed at various

Posted ContentDOI
TL;DR: In this article, the authors modify the model for selecting project risk response strategies proposed by Zhang and Fan (2014) and verify the modified method on a real project in the electrical industry.
Abstract: Zhang and Fan (2014) proposed a model for selecting project risk response strategies. In this article, we present our modifications of this model. The second chapter summarizes the weaknesses of the model presented by Zhang and Fan (2014) and contains a proposal for its improvement. In the third chapter, a new model is presented. The fourth chapter contains the verification of the proposed method, conducted on a real project in the electrical industry. In the fifth chapter, conclusions and further research possibilities are presented.

Posted ContentDOI
TL;DR: In this article, the authors present a model and quantitative measures aimed at improving the efficiency of a production-supply system described by a three-dimensional stochastic process, and the laws governing the functioning of the system are presented, corresponding to three different states of the stock level in subsystem M. These laws generate the presented quantitative model of the examined system, which enables the construction of the proposed quantitative measures supporting the managing process of such a system.
Abstract: This article presents the construction of a model and quantitative measures aimed at improving the efficiency of a production-supply system described by a three-dimensional stochastic process. For this purpose, the laws governing the functioning of the system are presented, corresponding to the three different states of the stock level in subsystem M. These laws generate the presented quantitative model of the examined system, which enables the construction of the proposed quantitative measures supporting the managing process of such a system.

Journal ArticleDOI
TL;DR: In this article, the preference coefficients for the population have been assigned using a weighted arithmetic mean, where the weights are the square roots of the sizes of the subpopulations; the statistical properties of these constants are presented in the context of decision making.
Abstract: The issue of decision-making has been examined based on the preferences of the entire population, when the preferences of a few subpopulations varying significantly in size are known. The purpose of assigning global preferences according to the coefficients proposed here was to avoid marginalising the preferences of the smaller subpopulations. The preference coefficients for the population have been assigned using a weighted arithmetic mean, where the weights are the square roots of the sizes of the subpopulations. This is similar to the voting system known as the “Jagiellonian compromise”. The statistical properties of these constants were presented in the context of decision making. These results have been illustrated by way of an example where the subpopulations exhibit significant differences, viz. students’ choice of an economics university in Lower Silesia, Poland.
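A minimal sketch of the weighting rule described above, with made-up numbers: the population-level preference coefficient of each alternative is the weighted arithmetic mean of the subpopulation coefficients, the weight of a subpopulation being the square root of its size (cf. the "Jagiellonian compromise"); plain proportional weighting is shown for comparison, to illustrate how the smaller subpopulations are less marginalised.

```python
import numpy as np

sizes = np.array([9000.0, 900.0, 100.0])        # subpopulation sizes (made up)
# rows = subpopulations, cols = preference coefficients for 3 alternatives
prefs = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.2, 0.7]])

w_sqrt = np.sqrt(sizes) / np.sqrt(sizes).sum()
global_sqrt = w_sqrt @ prefs                    # square-root weighting
global_prop = (sizes / sizes.sum()) @ prefs     # plain proportional weighting

print("square-root weights:", w_sqrt.round(3))
print("global preferences (sqrt):        ", global_sqrt.round(3))
print("global preferences (proportional):", global_prop.round(3))
```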