Showing papers in "IIE Transactions in 2007"


Journal ArticleDOI
TL;DR: In this paper, the authors analyze the characteristics of large-scale emergencies, such as earthquakes and terrorist attacks, and propose a general facility location model suited to them, in contrast to models used for routine emergencies such as house fires and regular health care needs.
Abstract: Research on facility location is abundant. However, this research does not typically address the particular conditions that arise when locating facilities to service large-scale emergencies, such as earthquakes, terrorist attacks, etc. In this work we first survey general facility location problems and identify models used to address common emergency situations, such as house fires and regular health care needs. We then analyze the characteristics of large-scale emergencies and propose a general facility location model that is suited for large-scale emergencies. This general facility location model can be cast as a covering model, a P-median model or a P-center model, each suited for different needs in a large-scale emergency. Illustrative examples are given to show how the proposed model can be used to optimize the locations of facilities for medical supplies to address large-scale emergencies in the Los Angeles area. Furthermore, comparison of the solutions obtained by respectively using the proposed mo...
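
A minimal brute-force sketch of the siting decision the abstract describes, on a hypothetical four-demand-point, three-site instance; it is not the paper's Los Angeles model, and all distances, weights, and site names are made up. It shows how the same data can be evaluated under a P-median or a P-center objective.

```python
# Toy illustration (not the paper's model): brute-force P-median and P-center siting.
from itertools import combinations

demand = {"A": 30, "B": 20, "C": 50, "D": 10}          # demand points and weights (hypothetical)
sites = ["s1", "s2", "s3"]                              # candidate facility sites (hypothetical)
dist = {                                                # travel distances (hypothetical)
    ("A", "s1"): 2, ("A", "s2"): 9, ("A", "s3"): 6,
    ("B", "s1"): 7, ("B", "s2"): 3, ("B", "s3"): 5,
    ("C", "s1"): 4, ("C", "s2"): 8, ("C", "s3"): 1,
    ("D", "s1"): 6, ("D", "s2"): 2, ("D", "s3"): 7,
}
p = 2                                                   # number of facilities to open

def closest(d_pt, opened):
    return min(dist[d_pt, s] for s in opened)

# P-median: minimize demand-weighted total distance to the nearest open facility.
p_median = min(combinations(sites, p),
               key=lambda opened: sum(w * closest(d, opened) for d, w in demand.items()))
# P-center: minimize the worst-case distance to the nearest open facility.
p_center = min(combinations(sites, p),
               key=lambda opened: max(closest(d, opened) for d in demand))

print("P-median choice:", p_median)
print("P-center choice:", p_center)
```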

445 citations


Journal ArticleDOI
TL;DR: RSDP provides an efficient, systematic and simple approach for evaluating multistate network reliability given all d-MPs, and it is found to be more efficient than the existing algorithm when the number of components in a system is not too small.
Abstract: The multistate networks under consideration consist of a source node, a sink node, and some independent failure-prone components in between the nodes. The components can work at different levels of capacity. For such a network, we are interested in evaluating the probability that the flow from the source node to the sink node is equal to or greater than a demanded flow of d units. A general method for reliability evaluation of such multistate networks is using minimal path (cut) vectors. A minimal path vector to system state d is called a d-MP. Approaches for generating all d-MPs have been reported. Given that all d-MPs have been found, the issue becomes how to evaluate the probability of the union of the events that the component state vector is greater than or equal to at least one of the d-MPs. There is a need for a more efficient method of determining the probability of this union of events. In this paper, we report an efficient recursive algorithm for this union probability evaluation based on the Su...
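
The union probability the abstract refers to can be illustrated with plain inclusion-exclusion when components are independent; the sketch below uses hypothetical state distributions and d-MP vectors and is not the paper's recursive sum-of-disjoint-products algorithm.

```python
# Toy illustration: P(component state vector >= at least one d-MP) by inclusion-exclusion.
from itertools import combinations

# P(component i is in state k); states are capacity levels 0, 1, 2 (hypothetical).
state_prob = [
    {0: 0.1, 1: 0.3, 2: 0.6},   # component 1
    {0: 0.2, 1: 0.5, 2: 0.3},   # component 2
    {0: 0.1, 1: 0.4, 2: 0.5},   # component 3
]
d_mps = [(1, 1, 0), (0, 1, 2), (2, 0, 1)]   # hypothetical d-MP vectors

def p_at_least(vec):
    """P(X_i >= vec_i for every component i), using independence."""
    prob = 1.0
    for probs, lo in zip(state_prob, vec):
        prob *= sum(p for state, p in probs.items() if state >= lo)
    return prob

union_prob = 0.0
for r in range(1, len(d_mps) + 1):
    for subset in combinations(d_mps, r):
        joint = tuple(max(col) for col in zip(*subset))   # componentwise maximum
        union_prob += (-1) ** (r + 1) * p_at_least(joint)

print(f"P(flow >= d) = {union_prob:.4f}")
```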

278 citations


Journal ArticleDOI
TL;DR: This paper describes a linearized model for optimizing network interdiction that is similar to previous studies in the field, and compares it to a penalty model that does not require linearization constraints.
Abstract: We consider a network interdiction problem on a multicommodity flow network, in which an attacker disables a set of network arcs in order to minimize the maximum profit that can be obtained from shipping commodities across the network. The attacker is assumed to have some budget for destroying (or “interdicting”) arcs, and each arc is associated with a positive interdiction expense. In this paper, we examine problems in which interdiction must be discrete (i.e., each arc must either be left alone or completely destroyed), and in which interdiction can be continuous (the capacities of arcs may be partially reduced). For the discrete problem, we describe a linearized model for optimizing network interdiction that is similar to previous studies in the field, and compare it to a penalty model that does not require linearization constraints. For the continuous case, we prescribe an optimal partitioning algorithm along with a heuristic procedure for estimating the optimal objective function value. We demonstrat...
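
For intuition, a toy single-commodity version of discrete interdiction can be brute-forced; the sketch below (hypothetical arcs, unit interdiction expenses, networkx for max flow) is far simpler than the paper's multicommodity MIP and penalty formulations.

```python
# Toy single-commodity illustration: the attacker removes arcs within budget to
# minimize the residual maximum flow from s to t.
from itertools import combinations
import networkx as nx

arcs = {("s", "a"): 10, ("s", "b"): 8, ("a", "t"): 7, ("b", "t"): 9, ("a", "b"): 4}
G = nx.DiGraph()
for (u, v), cap in arcs.items():
    G.add_edge(u, v, capacity=cap)

cost = {arc: 1 for arc in arcs}     # hypothetical interdiction expense per arc
budget = 2                          # attacker's budget

best_value, best_plan = float("inf"), None
for r in range(len(arcs) + 1):
    for removed in combinations(arcs, r):
        if sum(cost[a] for a in removed) > budget:
            continue
        H = G.copy()
        H.remove_edges_from(removed)
        value = nx.maximum_flow_value(H, "s", "t")
        if value < best_value:
            best_value, best_plan = value, removed

print("interdicted arcs:", best_plan, "| residual max flow:", best_value)
```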

223 citations


Journal ArticleDOI
TL;DR: In this paper, a network transformation and demand specification approach for no-notice evacuation modeling is presented, which enables the conversion of a typical transportation planning network to an evacuation network configuration in which a hot zone, evacuation destinations, virtual super-safe node and connectors are established.
Abstract: This paper presents a network transformation and demand specification approach for no-notice evacuation modeling. The research is aimed at formulating the Joint Evacuation Destination–Route-Flow-Departure (JEDRFD) problem of a no-notice mass evacuation into a system optimal dynamic traffic assignment model. The proposed network transformation technique permits the conversion of a typical transportation planning network to an evacuation network configuration in which a hot zone, evacuation destinations, virtual super-safe node and connectors are established. Combined with a demand specification method, the JEDRFD problem is formulated as a single-destination cell-transmission-model-based linear programming model. The advantage of the proposed model compared with prior studies in the literature is that the multi-dimensional evacuation operation decisions are jointly obtained at the optimum of the JEDRFD model. The linear single-destination structure of the proposed model implies another advantage in computa...

220 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a model to determine the optimal product reliability, price and warranty strategy that maximizes the total integrated profit for a general repairable product sold under a free replac...
Abstract: The success of a new product depends on both engineering decisions (product reliability) and marketing decisions (price, warranty). A higher reliability results in a higher manufacturing cost and higher sale price. Consumers are willing to pay a higher price only if they can be assured about product reliability. Product warranty is one such tool to signal reliability with a longer warranty period indicating better reliability. Better warranty terms result in increased sales and also higher expected warranty servicing costs. Warranty costs are reduced by improvements in product reliability. Learning effects result in the unit manufacturing cost decreasing with total sales volume and this in turn impacts on the sale price. As such, reliability, price and warranty decisions need to be considered jointly. The paper develops a model to determine the optimal product reliability, price and warranty strategy that achieve the biggest total integrated profit for a general repairable product sold under a free replac...

217 citations


Journal ArticleDOI
TL;DR: Two stochastic network interdiction models for thwarting nuclear smuggling are described, including the important special case in which the sensors can only be installed at border crossings of a single country so that the resulting model is defined on a bipartite network.
Abstract: We describe two stochastic network interdiction models for thwarting nuclear smuggling. In the first model, the smuggler travels through a transportation network on a path that maximizes the probability of evading detection, and the interdictor installs radiation sensors to minimize that evasion probability. The problem is stochastic because the smuggler's origin-destination pair is known only through a probability distribution at the time when the sensors are installed. In this model, the smuggler knows the locations of all sensors and the interdictor and the smuggler “agree” on key network parameters, namely the probabilities the smuggler will be detected while traversing the arcs of the transportation network. Our second model differs in that the interdictor and smuggler can have differing perceptions of these network parameters. This model captures the case in which the smuggler is aware of only a subset of the sensor locations. For both models, we develop the important special case in which the senso...
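
The smuggler's subproblem in the first model (a maximum-evasion-probability path) reduces to a shortest-path computation under minus-log evasion weights; the sketch below pairs that with a brute-force sensor placement over a hypothetical network and origin-destination distribution, and is not the paper's stochastic program.

```python
# Toy illustration: interdictor places sensors to minimize the smuggler's expected
# maximum evasion probability over an origin-destination distribution (all values hypothetical).
from itertools import combinations
from math import log, exp
import networkx as nx

edges = [("o1", "a"), ("o1", "b"), ("o2", "b"), ("a", "d"), ("b", "d")]
p_detect = 0.8                        # hypothetical detection probability on a sensed arc
od_distribution = {("o1", "d"): 0.6, ("o2", "d"): 0.4}
num_sensors = 2

def max_evasion(sensed, origin, dest):
    """Smuggler's best evasion probability: shortest path under -log(evade) arc weights."""
    G = nx.DiGraph()
    for e in edges:
        evade = 1.0 - (p_detect if e in sensed else 0.0)   # prob. of slipping past arc e
        G.add_edge(*e, w=-log(evade))
    return exp(-nx.shortest_path_length(G, origin, dest, weight="w"))

best = min(
    combinations(edges, num_sensors),
    key=lambda sensed: sum(prob * max_evasion(sensed, o, d)
                           for (o, d), prob in od_distribution.items()),
)
print("sensor placement:", best)
```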

209 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose an innovative modeling and analysis framework to study the entire system of physical and economic infrastructures, including power, petroleum, natural gas, water, and communications.
Abstract: Modern society's physical health depends vitally upon a number of real, interdependent, critical infrastructure networks that deliver power, petroleum, natural gas, water, and communications. Its economic health depends on a number of other infrastructure networks, some virtual and some real, that link residences, industries, commercial sectors, and transportation sectors. The continued prosperity and national security of the US depends on our ability to understand the vulnerabilities of and analyze the performance of both the individual infrastructures and the entire interconnected system of infrastructures. Only then can we respond to potential disruptions in a timely and effective manner. Collaborative efforts among Sandia, other government agencies, private industry, and academia have resulted in realistic models for many of the individual component infrastructures. In this paper, we propose an innovative modeling and analysis framework to study the entire system of physical and economic infrastructur...

146 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate how a supplier can use a quantity discount schedule to influence the stocking decisions of a downstream buyer that faces a single period of stochastic demand.
Abstract: We investigate how a supplier can use a quantity discount schedule to influence the stocking decisions of a downstream buyer that faces a single period of stochastic demand. In contrast to much of the work that has been done on single-period supply contracts, we assume that there are no interactions between the supplier and the buyer after demand information is revealed and that the buyer has better information about the distribution of demand than does the supplier. We characterize the structure of the optimal discount schedule for both all-unit and incremental discounts and show that the supplier can earn larger profits with an all-unit discount.
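
The buyer's side of an all-unit discount is easy to sketch numerically: for each candidate order quantity, evaluate the newsvendor's expected profit under the discounted unit cost and keep the best. The prices, demand distribution, and breakpoint below are hypothetical, and the sketch does not cover the paper's design of the supplier's optimal schedule.

```python
# Toy illustration of the downstream buyer's stocking decision under an all-unit discount.
import numpy as np

rng = np.random.default_rng(0)
demand = rng.poisson(lam=100, size=200_000)      # buyer's (private) demand distribution
retail, salvage = 10.0, 2.0

def unit_cost(q):                                # all-unit discount schedule (hypothetical)
    return 6.0 if q < 120 else 5.5               # every unit is cheaper once q >= 120

def expected_profit(q):
    sold = np.minimum(demand, q)
    return retail * sold.mean() + salvage * (q - sold).mean() - unit_cost(q) * q

q_best = max(range(50, 201), key=expected_profit)
print("buyer's order quantity:", q_best, "| expected profit:", round(expected_profit(q_best), 1))
```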

141 citations


Journal ArticleDOI
TL;DR: A Lagrangian heuristic is developed to obtain near-optimal solutions, with reasonable computational requirements, for large instances of the problem of locating distribution centers in a two-stage supply chain that replenishes a single product at retailers.
Abstract: Most existing network design and facility location models have focused on the trade-off between the fixed costs of locating facilities and variable transportation costs between facilities and customers. However, operational performance measures such as service levels and lead times are what motivates customers to bring business to a company and should be considered in the design of a distribution network. While some previous work has considered lead times and safety stocks separately, they are closely related in practice, since safety stocks are often set relative to the distribution of demand over the lead time. In this paper we consider a two-stage supply chain with a production facility that replenishes a single product at retailers. The objective is to locate Distribution Centers (DCs) in the network such that the sum of the location and inventory (pipeline and safety stock) costs is minimized. The replenishment lead time at the DCs depends on the volume of flow through the DC. We require the DCs to c...

111 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyze the incentives for manufacturers and retailers within a supply chain to distort information when they share it and propose a mechanism that results in truthful information sharing in a make-to-order supply chain consisting of a single manufacturer and a single retailer.
Abstract: The existing literature on supply chain information sharing assumes that information is shared truthfully. Unless each party can verify the authenticity of the other party's information, manufacturers and retailers may divulge false information for their own benefit. These information distortions may reduce the benefit levels or even stop information sharing in supply chains. We analyze the incentives for manufacturers and retailers within a supply chain to distort information when they share it and propose a mechanism that results in truthful information sharing. We consider a make-to-order supply chain consisting of a single manufacturer and a single retailer. The manufacturer and the retailer set prices based on their private forecasts of uncertain demand. If both parties share their forecasts truthfully, the manufacturer always benefits; however, the retailer benefits only if the manufacturer sets a lower wholesale price when information is shared compared to when information is not shared. However, w...

109 citations


Journal ArticleDOI
TL;DR: The authors show that an exhaustive search of the sequence-pair solution space finds the optimal layout of the MIP-FLP and that every sequence-pair solution is position consistent (although possibly not layout feasible), and they propose a genetic-algorithm-based heuristic built on this representation.
Abstract: In Facility Layout Problem (FLP) research, the continuous-representation-based FLP can consider all feasible all-rectangular-department solutions. Given this flexibility, this representation has become the representation of choice in FLP research. Much of this research is based on a methodology of Mixed-Integer Programming (MIP) models. However, these MIP-FLP models can only solve problems with a limited number of departments to optimality due to the large number of combinations of the binary variables used in the models to maintain feasibility with respect to departments overlapping. Our research centers around the sequence-pair representation, a concept that originated in the Very Large Scale Integration (VLSI) design literature. We show that an exhaustive search of the sequence-pair solution space will result in finding the optimal layout of the MIP-FLP and that every sequence-pair solution is position consistent (although possibly not layout feasible) in the MIP-FLP. We propose a genetic-algorithm-bas...
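
The sequence-pair idea can be illustrated by decoding a pair of permutations into non-overlapping rectangle positions (Murata-style semantics); this toy decoder is not the paper's genetic algorithm or MIP formulation, and the department sizes are hypothetical.

```python
# Toy sequence-pair decoder: a before b in both sequences => a left of b;
# a after b in the first sequence but before b in the second => a below b.
def decode(gamma_plus, gamma_minus, width, height):
    pos_p = {b: i for i, b in enumerate(gamma_plus)}
    pos_m = {b: i for i, b in enumerate(gamma_minus)}
    left_of = lambda a, b: pos_p[a] < pos_p[b] and pos_m[a] < pos_m[b]
    below   = lambda a, b: pos_p[a] > pos_p[b] and pos_m[a] < pos_m[b]

    x, y = {}, {}
    for b in gamma_plus:                       # Γ+ order respects the left-of relation
        x[b] = max((x[a] + width[a] for a in x if left_of(a, b)), default=0)
    for b in gamma_minus:                      # Γ− order respects the below relation
        y[b] = max((y[a] + height[a] for a in y if below(a, b)), default=0)
    return {b: (x[b], y[b]) for b in width}

width  = {"A": 3, "B": 2, "C": 4}
height = {"A": 2, "B": 3, "C": 1}
print(decode(["A", "B", "C"], ["B", "A", "C"], width, height))
```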

Journal ArticleDOI
TL;DR: A causal modeling approach is proposed to improve an existing causal discovery algorithm by integrating manufacturing domain knowledge with the algorithm and the approach is demonstrated by discovering the causal relationships among the product quality and process variables in a rolling process.
Abstract: This paper investigates learning causal relationships from the extensive datasets that are becoming increasingly available in manufacturing systems. A causal modeling approach is proposed to improve an existing causal discovery algorithm by integrating manufacturing domain knowledge with the algorithm. The approach is demonstrated by discovering the causal relationships among the product quality and process variables in a rolling process. When allied with engineering interpretations, the results can be used to facilitate rolling process control.

Journal ArticleDOI
TL;DR: This study shows that H-mine has an excellent performance for various kinds of data, outperforms currently available algorithms in different settings, and is highly scalable to mining large databases.
Abstract: In this study, we propose a simple and novel data structure using hyper-links, H-struct, and a new mining algorithm, H-mine, which takes advantage of this data structure and dynamically adjusts links in the mining process. A distinct feature of this method is that it has a very limited and precisely predictable main memory cost and runs very quickly in memory-based settings. Moreover, it can be scaled up to very large databases using database partitioning. When the data set becomes dense, (conditional) FP-trees can be constructed dynamically as part of the mining process. Our study shows that H-mine has an excellent performance for various kinds of data, outperforms currently available algorithms in different settings, and is highly scalable to mining large databases. This study also proposes a new data mining methodology, space-preserving mining, which may have a major impact on the future development of efficient and scalable data mining methods.

Journal ArticleDOI
TL;DR: In this article, the authors consider ambulance allocation and reallocation models for a post-disaster relief operation and present two iterative procedures to optimize the makespan and the weighted total flow time, respectively.
Abstract: In this paper, we consider ambulance allocation and reallocation models for a post-disaster relief operation. The initial focus is on allocating the correct number of ambulances to each cluster at the beginning of the rescue process. We formulate a deterministic model which depicts how a cluster grows after a disaster strikes. Based on the model and given a number of ambulances, we develop methods to calculate critical time measures, e.g., the completion time for each cluster. Then we present two iterative procedures to optimize the makespan and the weighted total flow time, respectively. The second problem analyzes the ambulance reallocation problem on the basis of a discrete time policy. The benefits of redistribution include providing service to new clusters and fully utilizing ambulances. We consider the objective of minimizing the makespan. A complication is that the distance between clusters needs to be factored in when making an ambulance reallocation decision. Our model permits consideration of th...

Journal ArticleDOI
TL;DR: This paper presents a systematic methodology to construct a statistical prediction model for failure event based on event sequence data that can help proactively diagnose machine faults with a sufficient lead time before actual system failures to allow preventive maintenance to be scheduled thereby reducing the downtime costs.
Abstract: The analysis of event sequence data that contains system failures is becoming increasingly important in the design of service and maintenance policies. This paper presents a systematic methodology to construct a statistical prediction model for failure event based on event sequence data. First, frequent failure signatures, defined as a group of events/errors that repeatedly occur together, are identified automatically from the event sequence by use of an efficient algorithm. Then, the Cox proportional hazard model, that is extensively used in biomedical survival analysis, is used to provide a statistically rigorous prediction of system failures based on the time-to-failure data extracted from the event sequences. The identified failure signatures are used to select significant covariates for the Cox model, i.e., only the events and/or event combinations in the signatures are treated as explanatory variables in the Cox model fitting. By combining the failure signature and Cox model approaches the proposed ...
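
A minimal sketch of the second stage only (the signature-mining step is not shown), assuming the lifelines package and made-up time-to-failure records; the binary columns stand in for indicators of the mined failure signatures.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical time-to-failure records extracted from event sequences; each binary
# column indicates whether a mined failure signature occurred in the observation window.
data = pd.DataFrame({
    "time_to_failure": [120, 45, 300, 80, 210, 60, 150, 400],   # hours
    "failed":          [1,   1,   0,   1,   0,   1,   1,   0],  # 0 = censored
    "sig_overheat":    [1,   0,   0,   1,   1,   1,   0,   0],
    "sig_io_error":    [0,   1,   0,   0,   1,   1,   1,   0],
})

cph = CoxPHFitter()
cph.fit(data, duration_col="time_to_failure", event_col="failed")
cph.print_summary()          # hazard ratio per signature covariate
```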

Journal ArticleDOI
TL;DR: In this article, the authors consider an extension of the single-period inventory model with stochastic demand where a put option can be purchased to reduce losses resulting from low demand and show that the same order quantity maximizes the expected profit with or without the option.
Abstract: In this paper we consider an extension of the single-period inventory model with stochastic demand where a put option can be purchased to reduce losses resulting from low demand. The newsvendor not only chooses the order quantity but also determines the “strike price” and/or the “strike quantity” of the put option. As the buyer of the put option, the newsvendor pays the option writer an amount that equals the expected option payoff plus a risk premium and receives from the option writer the strike price (adjusted for salvage value) for each unit that the demand falls below the strike quantity. The newsvendor is risk-averse and attempts to maximize an expected utility function. We show that: (i) the same order quantity maximizes the expected profit with or without the option; and (ii) the strike price and strike quantity do not affect the newsvendor's maximum expected profit but they do affect the variance of the profit. We use concepts from stochastic dominance theory to prove the following result: if the...
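
Result (ii) above is easy to see in simulation: if the option is priced at its expected payoff (zero risk premium), the expected profit is unchanged while its variance drops. The quantities below are hypothetical and the sketch is not the paper's analytical treatment.

```python
# Monte Carlo sketch: a put option priced at its expected payoff leaves the newsvendor's
# mean profit unchanged but reduces its variability.
import numpy as np

rng = np.random.default_rng(1)
demand = rng.poisson(100, size=500_000)
price, cost, salvage = 12.0, 7.0, 3.0
q = 105                                           # order quantity (fixed for illustration)

base_profit = price * np.minimum(demand, q) + salvage * np.maximum(q - demand, 0) - cost * q

strike_qty, strike_price = 95, 9.0                # put option terms (hypothetical)
payoff = (strike_price - salvage) * np.maximum(strike_qty - demand, 0)
premium = payoff.mean()                           # zero risk premium for the illustration

hedged_profit = base_profit + payoff - premium
print(f"mean profit:   base {base_profit.mean():.1f}  hedged {hedged_profit.mean():.1f}")
print(f"profit st.dev: base {base_profit.std():.1f}  hedged {hedged_profit.std():.1f}")
```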

Journal ArticleDOI
TL;DR: In this paper, the authors formally establish that reinforcement learning, currently one of the most actively researched paradigms in the area of machine learning, constitutes a rigorous, efficient, and effectively implementable modeling framework for providing (near-)optimal solutions to the optimal disassembly problem, in the face of the aforementioned uncertainties.
Abstract: Currently there is increasing consensus that one of the main issues differentiating remanufacturing from more traditional manufacturing processes is the need to effectively model and manage the high levels of uncertainty inherent in these new processes. Hence, the work presented in this paper concerns the issue of uncertainty modeling and management as it arises in the context of the optimal disassembly planning problem, one of the key problems to be addressed by remanufacturing processes. More specifically, the presented results formally establish that the theory of reinforcement learning, currently one of the most actively researched paradigms in the area of machine learning, constitutes a rigorous, efficient, and effectively implementable modeling framework for providing (near-)optimal solutions to the optimal disassembly problem, in the face of the aforementioned uncertainties. In addition, the proposed approach is exemplified and elucidated by application on a case study borrowed from the relevant li...
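
A toy tabular Q-learning sketch, not the paper's formulation: the agent learns which part of a three-part assembly to detach first when each attempt recovers the part intact only with some probability. All part names and values are hypothetical.

```python
# Tabular Q-learning on a tiny stochastic disassembly problem.
import random
from collections import defaultdict

parts = {"cover": (0.9, 2.0, 0.2), "board": (0.6, 10.0, 1.0), "frame": (0.8, 4.0, 0.5)}
#                 (P(intact), value if intact, scrap value) -- hypothetical

def step(state, part):
    p_ok, value, scrap = parts[part]
    reward = value if random.random() < p_ok else scrap
    return state | {part}, reward                  # the part is removed either way

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.95, 0.1
for _ in range(20_000):                            # training episodes
    state = frozenset()
    while len(state) < len(parts):
        actions = [p for p in parts if p not in state]
        if random.random() < eps:
            a = random.choice(actions)             # explore
        else:
            a = max(actions, key=lambda p: Q[state, p])   # exploit
        nxt, r = step(state, a)
        future = max((Q[nxt, p] for p in parts if p not in nxt), default=0.0)
        Q[state, a] += alpha * (r + gamma * future - Q[state, a])
        state = nxt

print("learned first part to remove:", max(parts, key=lambda p: Q[frozenset(), p]))
```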

Journal ArticleDOI
TL;DR: The proposed heuristic models produce better results than the studied data mining methods, and there is no difference between MLP networks and SVR techniques when their mean square error values are compared.
Abstract: Technical indicators are used with two heuristic models, kernel principal component analysis and factor analysis in order to identify the most influential inputs for a forecasting model. Multilayer perceptron (MLP) networks and support vector regression (SVR) are used with different inputs. We assume that the future value of a stock price/return depends on the financial indicators although there is no parametric model to explain this relationship, which comes from the technical analysis. Comparison studies show that SVR and MLP networks require different inputs. Furthermore, proposed heuristic models produce better results than the studied data mining methods. In addition to this, we can say that there is no difference between MLP networks and SVR techniques when we compare their mean square error values.
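
A hedged sketch of the kind of comparison described, on synthetic data rather than real technical indicators: kernel-PCA-reduced inputs feed an SVR and an MLP regressor, and test MSE is compared. Model settings are illustrative, not the authors'.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))                      # stand-ins for technical indicators
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in {
    "SVR": SVR(C=10.0),
    "MLP": MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0),
}.items():
    pipe = make_pipeline(StandardScaler(), KernelPCA(n_components=5, kernel="rbf"), model)
    pipe.fit(X_tr, y_tr)
    print(name, "test MSE:", round(mean_squared_error(y_te, pipe.predict(X_te)), 4))
```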

Journal ArticleDOI
TL;DR: An optimization model is introduced that explicitly captures the interdependency between network design and inventory stocking decisions and shows that the integrated approach can provide significant cost savings over the decoupled approach, shifting the whole efficient frontier curve between cost and service level to superior regions.
Abstract: We study the integrated logistics network design and inventory stocking problem as characterized by the interdependency of the design and stocking decisions in service parts logistics. These two sets of decisions are usually considered sequentially in practice, and the associated problems are tackled separately in the research literature. The overall problem is typically further complicated due to time-based service constraints that provide lower limits on the percentage of demand satisfied within specified time windows. We introduce an optimization model that explicitly captures the interdependency between network design (location of facilities, and allocation of demands to facilities) and inventory stocking decisions (stock levels and their corresponding stochastic fill rates), and present computational results from our extensive experiments that investigate the effects of several factors including demand levels, time-based service levels and costs. We show that the integrated approach can provide signi...

Journal ArticleDOI
TL;DR: The performance of DFTC compared favorably with that of other distribution-free procedures in stationary test processes having various types of autocorrelation functions as well as normal or nonnormal marginals.
Abstract: A distribution-free tabular CUSUM chart called DFTC is designed to detect shifts in the mean of an autocorrelated process. The chart's Average Run Length (ARL) is approximated by generalizing Siegmund's ARL approximation for the conventional tabular CUSUM chart based on independent and identically distributed normal observations. Control limits for DFTC are computed from the generalized ARL approximation. Also discussed are the choice of reference value and the use of batch means to handle highly correlated processes. The performance of DFTC compared favorably with that of other distribution-free procedures in stationary test processes having various types of autocorrelation functions as well as normal or nonnormal marginals.
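
A bare-bones tabular CUSUM on an AR(1) series shows the statistic DFTC builds on; the control limit below is arbitrary, whereas DFTC sets its limit from the generalized ARL approximation and may use batch means for highly correlated data.

```python
# Minimal two-sided tabular CUSUM applied to autocorrelated observations (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
n, phi = 600, 0.7
x = np.zeros(n)
for t in range(1, n):                       # stationary AR(1) observations
    x[t] = phi * x[t - 1] + rng.normal()
x[300:] += 1.5                              # mean shift to be detected

mu0, k, h = 0.0, 0.5, 12.0                  # target mean, reference value, control limit
s_hi = s_lo = 0.0
for t, obs in enumerate(x):
    s_hi = max(0.0, s_hi + obs - mu0 - k)   # upper one-sided CUSUM statistic
    s_lo = max(0.0, s_lo - obs + mu0 - k)   # lower one-sided CUSUM statistic
    if s_hi > h or s_lo > h:
        print("first signal at observation", t)
        break
```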

Journal ArticleDOI
TL;DR: A textbook treatment of probability and random processes, covering sets, probability spaces and axioms, discrete and continuous distributions, random variables and vectors, estimation, random processes, Wiener and Kalman filtering, and probabilistic methods in transmission tomography.
Abstract: Preface. Chapter 1: Sets, Fields, and Events. 1.1 Set Definitions. 1.2 Set Operations. 1.3 Set Algebras, Fields, and Events. Chapter 2: Probability Space and Axioms. 2.1 Probability Space. 2.2 Conditional Probability. 2.3 Independence. 2.4 Total Probability and Bayes' Theorem. Chapter 3: Basic Combinatorics. 3.1 Basic Counting Principles. 3.2 Permutations. 3.3 Combinations. Chapter 4: Discrete Distributions. 4.1 Bernoulli Trials. 4.2 Binomial Distribution. 4.3 Multinomial Distribution. 4.4 Geometric Distribution. 4.5 Negative Binomial Distribution. 4.6 Hypergeometric Distribution. 4.7 Poisson Distribution. 4.8 Logarithmic Distribution. 4.9 Summary of Discrete Distributions. Chapter 5: Random Variables. 5.1 Definition of Random Variables. 5.2 Determination of Distribution and Density Functions. 5.3 Properties of Distribution and Density Functions. 5.4 Distribution Functions from Density Functions. Chapter 6: Continuous Random Variables and Basic Distributions. 6.1 Introduction. 6.2 Uniform Distribution. 6.3 Exponential Distribution. 6.4 Normal or Gaussian Distribution. Chapter 7: Other Continuous Distributions. 7.1 Introduction. 7.2 Triangular Distribution. 7.3 Laplace Distribution. 7.4 Erlang Distribution. 7.5 Gamma Distribution. 7.6 Weibull Distribution. 7.7 Chi-Square Distribution. 7.8 Chi and Other Allied Distributions. 7.9 Student-t Density. 7.10 Snedecor F Distribution. 7.11 Lognormal Distribution. 7.12 Beta Distribution. 7.13 Cauchy Distribution. 7.14 Pareto Distribution. 7.15 Gibbs Distribution. 7.16 Mixed Distributions. 7.17 Summary of Distributions of Continuous Random Variables. Chapter 8: Conditional Densities and Distributions. 8.1 Conditional Distribution and Density for P(A) = 0. 8.2 Conditional Distribution and Density for P(A) 0. 8.3 Total Probability and Bayes' Theorem for Densities. Chapter 9: Joint Densities and Distributions. 9.1 Joint Discrete Distribution Functions. 9.2 Joint Continuous Distribution Functions 9.3 Bivariate Gaussian Distributions. Chapter 10: Moments and Conditional Moments. 10.1 Expectations. 10.2 Variance. 10.3 Means and Variances of Some Distributions. 10.4 Higher-Order Moments. 10.5 Bivariate Gaussian. Chapter 11: Characteristic Functions and Generating Functions. 11.1 Characteristic Functions. 11.2 Examples of Characteristic Functions. 11.3 Generating Functions. 11.4 Examples of Generating Functions. 11.5 Moment Generating Functions. 11.6 Cumulant Generating Functions. 11.7 Table of Means and Variances. Chapter 12: Functions of a Single Random Variable. 12.1 Random Variable g(X). 12.2 Distribution of Y = g(X ). 12.3 Direct Determination of Density fY (y) from fX(x). 12.4 Inverse Problem: Finding g(x) Given fX(x) and fY (y). 12.5 Moments of a Function of a Random Variable. Chapter 13: Functions of Multiple Random Variables. 13.1 Function of Two Random Variables, Z = g(X,Y ). 13.2 Two Functions of Two Random Variables, Z = g(X,Y ), W = h(X,Y ). 13.3 Direct Determination of Joint Density fZW(z,w ) from fXY(x,y). 13.4 Solving Z = g(X,Y ) Using an Auxiliary Random Variable. 13.5 Multiple Functions of Random Variables. Chapter 14: Inequalities, Convergences, and Limit Theorems. 14.1 Degenerate Random Variables. 14.2 Chebyshev and Allied Inequalities. 14.3 Markov Inequality. 14.4 Chernoff Bound. 14.5 Cauchy-Schwartz Inequality. 14.6 Jensen's Inequality. 14.7 Convergence Concepts. 14.8 Limit Theorems. Chapter 15: Computer Methods for Generating Random Variates. 15.1 Uniform-Distribution Random Variates. 15.2 Histograms. 
15.3 Inverse Transformation Techniques. 15.4 Convolution Techniques. 15.5 Acceptance-Rejection Techniques. Chapter 16: Elements of Matrix Algebra. 16.1 Basic Theory of Matrices. 16.2 Eigenvalues and Eigenvectors of Matrices. 16.3 Vectors and Matrix Differentiations. 16.4 Block Matrices. Chapter 17: Random Vectors and Mean-Square Estimation. 17.1 Distributions and Densities. 17.2 Moments of Random Vectors. 17.3 Vector Gaussian Random Variables. 17.4 Diagonalization of Covariance Matrices. 17.5 Simultaneous Diagonalization of Covariance Matrices. 17.6 Linear Estimation of Vector Variables. Chapter 18: Estimation Theory. 18.1 Criteria of Estimators. 18.2 Estimation of Random Variables. 18.3 Estimation of Parameters (Point Estimation). 18.4 Interval Estimation (Confidence Intervals). 18.5 Hypothesis Testing (Binary). 18.6 Bayesian Estimation. Chapter 19: Random Processes. 19.1 Basic Definitions. 19.2 Stationary Random Processes. 19.3 Ergodic Processes. 19.4 Estimation of Parameters of Random Processes. 19.5 Power Spectral Density. Chapter 20: Classification of Random Processes. 20.1 Specifications of Random Processes. 20.2 Poisson Process. 20.3 Binomial Process. 20.4 Independent Increment Process. 20.5 Random-Walk Process. 20.6 Gaussian Process. 20.7 Wiener Process (Brownian Motion). 20.8 Markov Process. 20.9 Markov Chain. 20.10 Martingale Process. 20.11 Periodic Random Process. 20.12 Aperiodic Random Process (Karhunen-Loeve Expansion). Chapter 21: Random Processes and Linear Systems. 21.1 Review of Linear Systems. 21.2 Random Processes through Linear Systems. 21.3 Linear Filters. 21.4 Bandpass Stationary Random Processes. Chapter 22: Weiner and Kalman Filters. 22.1 Review of Orthogonality Principle. 22.2 Wiener Filtering. 22.3 Discrete Kalman Filter. 22.4 Continuous Kalman Filter. Chapter 23: Probabilistic Methods in Transmission Tomography. 23.1 Introduction. 23.2 Stochastic Model. 23.3 Stochastic Estimation Algorithm. 23.4 Prior Distribution P(M). 23.5 Computer Simulation. 23.6 Results and Conclusions. 23.7 Discussion of Results. 23.8 References for Chapter 23. APPENDIXES. A: A Fourier Transform Tables. B: Cumulative Gaussian Tables. C: Inverse Cumulative Gaussian Tables. D: Inverse Chi-Square Tables. E: Inverse Student-t Tables. F: Cumulative Poisson Distribution. G: Cumulative Binomial Distribution. References. Index.

Journal ArticleDOI
TL;DR: The method integrates max-min linear programming, hydraulic simulation, and genetic algorithms for constraint generation to find a security allocation that maximizes an attacker's marginal cost of inflicting damage through the destruction of network components.
Abstract: This paper develops a method for allocating a security budget to a water supply network so as to maximize the network's resilience to physical attack. The method integrates max-min linear programming, hydraulic simulation, and genetic algorithms for constraint generation. The objective is to find a security allocation that maximizes an attacker's marginal cost of inflicting damage through the destruction of network components. We illustrate the method on two example networks, one large and one small, and investigate its allocation effectiveness and computational characteristics.

Journal ArticleDOI
TL;DR: In this paper, a partially observable, discrete-time Markov decision process is proposed to obtain the near-optimal combined preventive maintenance/statistical process control policy that minimizes the costs associated with maintenance, sampling, and poor quality.
Abstract: The economic design of control charts and the optimization of preventive maintenance policies have separately received a tremendous amount of attention in the quality and reliability literature over the years in an attempt to reduce the costs associated with operating manufacturing processes. Not until recently has the proposal been made to integrate these two fields and utilize the relationship between quality and equipment performance to improve the productivity of a manufacturing process. In this paper, we extend the initial preliminary investigation of this idea of using an X̄ chart in conjunction with an age-replacement preventive maintenance policy. We formulate a partially observable, discrete-time Markov decision process in order to obtain the near-optimal combined preventive maintenance/statistical process control policy that minimizes the costs associated with maintenance, sampling, and poor quality. We develop transition probabilities for the various states of the infinite horizon problem and a so...

Journal ArticleDOI
TL;DR: An integrated scheduling and distribution model in which jobs completed by two different machines must be bundled together for delivery is considered to minimize the sum of the delivery cost and customers' waiting costs.
Abstract: We consider an integrated scheduling and distribution model in which jobs completed by two different machines must be bundled together for delivery. The objective is to minimize the sum of the delivery cost and customers' waiting costs. Such a model not only attempts to coordinate the job schedules on both machines, but also aims to coordinate the machine schedules with the delivery plan. Polynomial-time heuristics and approximation schemes are developed for the model with only direct shipments as well as the general model with milk-run deliveries.

Journal ArticleDOI
TL;DR: All existing procedures assume that the set of alternatives is available at the beginning of the experiment; in many situations, however, the alternatives are revealed (generated) sequentially during the experiment, and the authors introduce procedures that are capable of selecting the best alternative in these situations and that provide the desired statistical guarantees.
Abstract: Statistical Ranking and Selection (R&S) is a collection of experiment design and analysis techniques for selecting the system with the largest or smallest mean performance from among a finite set of alternatives. R&S procedures have received considerable research attention in the stochastic simulation community, and they have been incorporated in commercial simulation software. All existing procedures assume that the set of alternatives is available at the beginning of the experiment. In many situations, however, the alternatives are revealed (generated) sequentially during the experiment. We introduce procedures that are capable of selecting the best alternative in these situations and provide the desired statistical guarantees.

Journal ArticleDOI
TL;DR: A multivariate control region can be considered to be a pattern that represents the normal operating conditions of a process, and reference data can then be generated and used to learn the difference between this region and random noise.
Abstract: A multivariate control region can be considered to be a pattern that represents the normal operating conditions of a process. Reference data can then be generated and used to learn the difference between this region and random noise. Then multivariate statistical process control can be converted to a supervised learning task. This can dramatically reshape the control region and open the control problem to a rich collection of supervised learning tools. Such tools provide generalization error estimates that can be used to specify error rates. The effectiveness of such an approach is shown here. Such a computational approach is now easily accomplished with modern computing resources. Examples use random forests and a regularized least squares classifier as the learners.
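
A sketch of the artificial-contrast idea described above: in-control data form one class, uniformly generated noise the other, and a classifier learns the boundary. The learner, data, and thresholds here are illustrative, not the authors' exact setup.

```python
# Convert multivariate process monitoring into a supervised learning task.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
in_control = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=500)

lo, hi = in_control.min(axis=0), in_control.max(axis=0)
noise = rng.uniform(lo, hi, size=(500, 2))            # artificial "random noise" class

X = np.vstack([in_control, noise])
y = np.r_[np.zeros(500), np.ones(500)]                # 0 = in control, 1 = noise
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

new_obs = np.array([[0.5, 0.4], [2.5, -2.5]])         # second point violates the correlation
print(clf.predict_proba(new_obs)[:, 1])               # estimated out-of-control probability
```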

Journal ArticleDOI
TL;DR: The algorithm presented in this paper is based on a predecessor matrix and an element substitution technique that allows for the exact computation of minimal cut sets and the immediate inclusion of node failure without any changes to the pseudo-code.
Abstract: The computation of the reliability of two-terminal networks is a classical reliability problem. For these types of problems, one is interested, from a general perspective, in obtaining the probability that two specific nodes can communicate. This paper presents a holistic algorithm for the analysis of general networks that follow a two-terminal rationale. The algorithm is based on a set replacement approach and an element inheritance strategy that effectively obtains the minimal cut sets associated with a given network. The vast majority of methods available for obtaining two-terminal reliability are generally based on assumptions about the performance of the network. Some methods assume network components can be in one of two states: (i) either completely failed; or (ii) perfectly functioning, others usually assume that nodes are perfectly reliable and thus, these methods have to be complemented or transformed to account for node failure, and the remaining methods assume minimal cut sets can be readily c...
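
For very small networks, two-terminal reliability can be computed exactly by enumerating arc states, which gives a baseline against which cut-set-based methods can be checked; the arcs and reliabilities below are hypothetical and this is not the paper's algorithm.

```python
# Brute-force two-terminal reliability: enumerate every arc up/down state and
# check source-to-terminal connectivity.
from itertools import product

arcs = {("s", "a"): 0.9, ("s", "b"): 0.8, ("a", "t"): 0.85, ("b", "t"): 0.9, ("a", "b"): 0.7}

def connected(up_arcs):
    reach, frontier = {"s"}, ["s"]
    while frontier:
        u = frontier.pop()
        for a, b in up_arcs:
            if a == u and b not in reach:
                reach.add(b)
                frontier.append(b)
    return "t" in reach

reliability = 0.0
for states in product([0, 1], repeat=len(arcs)):
    prob, up = 1.0, []
    for (arc, p), s in zip(arcs.items(), states):
        prob *= p if s else (1 - p)
        if s:
            up.append(arc)
    if connected(up):
        reliability += prob

print(f"two-terminal reliability: {reliability:.4f}")
```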

Journal ArticleDOI
TL;DR: A tree-structured method that fits a simple but nontrivial model to each partition of the variable space, ensuring that each piece of the fitted regression function can be visualized with a graph or a contour plot.
Abstract: Many methods can fit models with a higher prediction accuracy, on average, than the least squares linear regression technique. But the models, including linear regression, are typically impossible to interpret or visualize. We describe a tree-structured method that fits a simple but nontrivial model to each partition of the variable space. This ensures that each piece of the fitted regression function can be visualized with a graph or a contour plot. For maximum interpretability, our models are constructed with negligible variable selection bias and the tree structures are much more compact than piecewise-constant regression trees. We demonstrate, by means of a large empirical study involving 27 methods, that the average prediction accuracy of our models is almost as high as that of the most accurate “black-box” methods from the statistics and machine learning literature.
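
A hedged sketch of the "simple model per partition" idea, not the authors' algorithm: a shallow regression tree defines the partitions and a linear model is fitted within each leaf, so every fitted piece remains easy to plot.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.where(X[:, 0] < 0, 2 * X[:, 0], -X[:, 0] + 5) + rng.normal(scale=0.3, size=400)

tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=40).fit(X, y)   # defines partitions
leaves = tree.apply(X)
leaf_models = {leaf: LinearRegression().fit(X[leaves == leaf], y[leaves == leaf])
               for leaf in np.unique(leaves)}                              # simple model per leaf

def predict(X_new):
    return np.array([leaf_models[l].predict(x.reshape(1, -1))[0]
                     for l, x in zip(tree.apply(X_new), X_new)])

print(predict(np.array([[-2.0], [2.0]])))
```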

Journal ArticleDOI
TL;DR: The proposed SPC system consists of efficient and robust profiling methods to accommodate different behavior patterns, including business changes, structural breakdowns, and unnecessary errors, and will allow business managers and engineers to establish successful customer loyalty programs for churn prevention and fraud detection.
Abstract: Statistical Process Control (SPC) techniques have been successfully used in manufacturing industries to trigger and identify the root cause of variations so as to promote quality improvement. This paper develops an SPC framework to identify important changes in business activity monitoring. To model and track thousands of diversified customer behaviors, the proposed SPC system consists of efficient and robust profiling methods to accommodate different behavior patterns including business changes, structural breakdowns, and unnecessary errors. Several customer profiling techniques are discussed and the activity monitoring performance based on the profiling algorithms is compared in a simulation example and a customer churn detection example in a telecommunications setting. The enhanced system will allow business managers and engineers to establish successful customer loyalty programs for churn prevention and fraud detection.

Journal ArticleDOI
TL;DR: In this paper, a new performance measure for statistical process control chart design is proposed to take into consideration variable shift sizes and corresponding quality impacts, and the proposed design methodology does not involve any cost estimation and the design procedure is as simple as looking up tables.
Abstract: Statistical process control charts are important tools for detecting process shifts. To ensure accurate, responsive fault detection, control chart design is critical. In the literature, control charts are typically designed by minimizing the control chart's response time, i.e., the average run length (ARL), to an anticipated shift size under a tolerable false alarm rate. However, process shifts, originating from various variation sources, often come with different sizes and result in different degrees of quality impacts. In this paper, we propose a new performance measure for EWMA and CUSUM control chart design to take into consideration the variable shift sizes and corresponding quality impacts. Unlike economic designs of control charts that suffer from a complex cost structure and intensive numerical computation, the proposed design methodology does not involve any cost estimation and the design procedure is as simple as looking up tables. Given the Gaussian random shifts and quadratic quality loss functi...
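
A Monte Carlo sketch of a shift-size-weighted performance measure in this spirit: shift sizes are drawn from a Gaussian, the EWMA run length is simulated for each, and runs are weighted by a quadratic loss in the shift size. The parameters are illustrative and the paper's exact measure and design tables are not reproduced.

```python
# Loss-weighted EWMA run-length estimate under Gaussian random shift sizes.
import numpy as np

rng = np.random.default_rng(5)
lam, L = 0.1, 2.7                                  # EWMA smoothing constant and limit width
sigma_z = np.sqrt(lam / (2 - lam))                 # asymptotic EWMA standard deviation

def run_length(shift, max_n=5000):
    z = 0.0
    for n in range(1, max_n + 1):
        z = (1 - lam) * z + lam * (rng.normal() + shift)
        if abs(z) > L * sigma_z:
            return n
    return max_n

shifts = rng.normal(0.0, 1.0, size=400)            # Gaussian random shift sizes
weighted = np.mean([s ** 2 * run_length(s) for s in shifts])   # quadratic-loss weighting
print("loss-weighted average run length:", round(weighted, 1))
```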