
Showing papers in "IIE Transactions in 2005"


Journal ArticleDOI
TL;DR: Bayesian updating methods that use real-time condition monitoring information to update the stochastic parameters of exponential degradation models are developed and used to develop a closed-form residual-life distribution for the monitored device.
Abstract: Real-time condition monitoring is becoming an important tool in maintenance decision-making. Condition monitoring is the process of collecting real-time sensor information from a functioning device in order to reason about the health of the device. To make effective use of condition information, it is useful to characterize a device degradation signal, a quantity computed from condition information that captures the current state of the device and provides information on how that condition is likely to evolve in the future. If properly modeled, the degradation signal can be used to compute a residual-life distribution for the device being monitored, which can then be used in decision models. In this work, we develop Bayesian updating methods that use real-time condition monitoring information to update the stochastic parameters of exponential degradation models. We use these degradation models to develop a closed-form residual-life distribution for the monitored device. Finally, we apply these degradation...

691 citations
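The exponential degradation model becomes linear after a log transform, which makes conjugate Bayesian updating straightforward. Below is a minimal sketch of that idea, not the authors' exact formulation: a bivariate normal prior on the (intercept, slope) of the log-signal is updated with monitoring data, and the posterior gives an approximate residual-life (threshold-crossing) distribution. All numbers, priors, and the known-noise-variance assumption are illustrative.

```python
import numpy as np
from scipy.stats import norm

def posterior(times, log_signal, prior_mean, prior_cov, noise_var):
    """Conjugate normal update of the (intercept, slope) coefficients."""
    X = np.column_stack([np.ones_like(times), times])
    prec0 = np.linalg.inv(prior_cov)
    cov = np.linalg.inv(prec0 + X.T @ X / noise_var)
    mean = cov @ (prec0 @ prior_mean + X.T @ log_signal / noise_var)
    return mean, cov

def prob_failed_by(t, mean, cov, noise_var, log_threshold):
    """Approximate P(log-degradation signal exceeds the threshold at time t)."""
    x = np.array([1.0, t])
    mu, var = x @ mean, x @ cov @ x + noise_var
    return norm.sf(log_threshold, loc=mu, scale=np.sqrt(var))

times = np.array([1.0, 2.0, 3.0, 4.0])
log_signal = np.array([0.2, 0.5, 0.9, 1.1])          # monitored log-degradation
mean, cov = posterior(times, log_signal, np.zeros(2), np.eye(2), 0.05)
print([round(prob_failed_by(t, mean, cov, 0.05, 2.0), 3) for t in (5, 8, 12)])
```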


Journal ArticleDOI
TL;DR: Analytical and simulation modelling demonstrate that even a small rate of stock loss undetected by the information system can lead to inventory inaccuracy that disrupts the replenishment process and creates severe out-of-stock situations.
Abstract: Many companies have automated their inventory management processes and now rely on information systems when making critical decisions. However, if the information is inaccurate, the ability of the system to provide a high availability of products at minimal operating cost can be compromised. In this paper, analytical and simulation modelling demonstrate that even a small rate of stock loss undetected by the information system can lead to inventory inaccuracy that disrupts the replenishment process and creates severe out-of-stock situations. In fact, revenue losses due to out-of-stock situations can far outweigh the stock losses themselves. This sensitivity of performance to inventory inaccuracy becomes even greater in systems operating in lean environments. Motivated by an automatic product identification technology under development at the Auto-ID Center at MIT, various methods of compensating for the inventory inaccuracy are presented and evaluated. Comparisons of the methods reveal that the...

328 citations
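A toy simulation reproduces the paper's core effect: replenishment targets the recorded inventory, while undetected shrinkage quietly erodes the physical stock until the shelf "freezes" empty. The base-stock policy and all parameters below are hypothetical stand-ins, far simpler than the paper's model.

```python
import random

def simulate(days=365, base_stock=50, mean_demand=10, shrink_rate=0.0, seed=1):
    rng = random.Random(seed)
    physical = system = base_stock          # system record starts accurate
    lost_sales = 0
    for _ in range(days):
        demand = rng.randint(0, 2 * mean_demand)
        sold = min(demand, physical)
        lost_sales += demand - sold
        physical -= sold
        system -= sold                      # sales are seen by the system...
        shrink = sum(rng.random() < shrink_rate for _ in range(physical))
        physical -= shrink                  # ...but shrinkage is not
        order = base_stock - system         # replenish to the *recorded* level
        physical += order
        system += order
    return lost_sales

# Even 2% undetected shrinkage per period inflates lost sales dramatically.
print(simulate(shrink_rate=0.0), simulate(shrink_rate=0.02))
```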


Journal ArticleDOI
TL;DR: This paper addresses the computational complexity of batching orders in a parallel-aisle warehouse and develops a branch-and-price optimization algorithm for the problem, modeling it as a generalized set-partitioning problem and presenting a column generation algorithm to solve its linear programming relaxation.
Abstract: Although the picking of items may make up as much as 60% of all labor activities in a warehouse and may account for as much as 65% of all operating expenses, many order picking problems are still not well understood. Indeed, simple rules of thumb or straightforward constructive heuristics are usually used in practice, even in state-of-the-art warehouse management systems; however, it might well be that more attractive algorithmic alternatives could be developed. We address one such fundamental materials handling problem: the batching of orders in a parallel-aisle warehouse so as to minimize the total traveling time needed to pick all items. Many heuristics have been proposed for this problem; however, a fundamental analysis of the problem is still lacking. In this paper, we first address the computational complexity of the problem. We prove that this problem is NP-hard in the strong sense but that it is solvable in polynomial time if no batch contains more than two orders. This result is not really surp...

285 citations
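The polynomial-time result for batches of at most two orders is, in essence, a matching argument. A sketch under hypothetical travel times: pair orders so that the total "savings" of joint picking is maximized, here using networkx for the matching step (orders left unmatched are picked solo).

```python
import networkx as nx

solo = {"A": 10, "B": 12, "C": 9, "D": 15}               # time to pick one order alone
pair = {("A", "B"): 16, ("A", "C"): 13, ("A", "D"): 21,
        ("B", "C"): 17, ("B", "D"): 18, ("C", "D"): 20}  # time to pick two jointly

G = nx.Graph()
for (i, j), t in pair.items():
    G.add_edge(i, j, weight=solo[i] + solo[j] - t)       # savings from pairing i and j

match = nx.max_weight_matching(G)                        # optimal pairing of orders
saved = sum(G[i][j]["weight"] for i, j in match)
print(match, sum(solo.values()) - saved)                 # batches and total travel time
```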


Journal ArticleDOI
TL;DR: In this article, a fuzzy Analytic Hierarchy Process (AHP) is used to reduce a set of conceptual design alternatives by eliminating those with low scores (or weights).
Abstract: The evaluation process of conceptual design alternatives in a new product development environment is a critical point for companies who operate in fast-growing markets. Various methods exist that are able to successfully carry out this difficult and time-consuming process. One of these methods, the Analytic Hierarchy Process (AHP) has been widely used to solve multiple-criteria decision-making problems (i.e., concept evaluation, equipment selection) in both academic research and in industrial practice. However, due to vagueness and uncertainty in the decision-maker's judgment, a crisp, pair-wise comparison with a conventional AHP may be unable to accurately capture the decision-maker's judgment. Therefore, fuzzy logic is introduced into the pair-wise comparison in the AHP to compensate for this deficiency in the conventional AHP. This is referred to as fuzzy AHP. In this paper, a fuzzy AHP method is used to reduce a set of conceptual design alternatives by eliminating those whose scores (or weights) are s...

210 citations
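As one concrete instance of the fuzzy pair-wise comparison idea, the sketch below uses triangular fuzzy judgments with Buckley's geometric-mean weights and centroid defuzzification; the paper's own procedure may differ, and the comparison matrix is invented.

```python
import numpy as np

# Triangular fuzzy pairwise comparisons (l, m, u) for 3 design alternatives.
M = [[(1, 1, 1),        (2, 3, 4),       (4, 5, 6)],
     [(1/4, 1/3, 1/2),  (1, 1, 1),       (1, 2, 3)],
     [(1/6, 1/5, 1/4),  (1/3, 1/2, 1),   (1, 1, 1)]]

n = len(M)
geo = [tuple(np.prod([M[i][j][k] for j in range(n)]) ** (1 / n) for k in range(3))
       for i in range(n)]                      # fuzzy geometric mean per row
crisp = [sum(g) / 3 for g in geo]              # centroid defuzzification
weights = [c / sum(crisp) for c in crisp]
print([round(w, 3) for w in weights])          # low-weight alternatives get eliminated
```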


Journal ArticleDOI
TL;DR: In this article, a degradation-based procedure for estimating the full and residual lifetime distribution of a single-unit system subject to Markovian deterioration is presented, which unifies real degradation measures with analytical, stochastic failure models to numerically compute the distributions and their moments.
Abstract: This paper presents a degradation-based procedure for estimating the full and residual lifetime distribution of a single-unit system subject to Markovian deterioration. The hybrid approach unites real degradation measures with analytical, stochastic failure models to numerically compute the distributions and their moments. Two distinct models are shown to perform well when compared with simulated data. Moreover, results obtained from the second model are compared with empirically generated lifetimes of 2024-T3 aluminum alloy specimens. The numerical experiments indicate that the proposed techniques are useful for remaining lifetime prognosis in both cases.

172 citations
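The lifetime and residual-lifetime distributions of a Markovian deterioration model follow from elementary absorbing-chain calculations. A discrete-time sketch with a hypothetical four-state chain (the last state is failure); the residual-life case simply starts the chain in the currently observed degraded state.

```python
import numpy as np

P = np.array([[0.90, 0.08, 0.02, 0.00],
              [0.00, 0.85, 0.10, 0.05],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])   # rows sum to 1; last state is absorbing

def life_cdf(start, horizon):
    """P(failure by step k) for k = 1..horizon, starting in state `start`."""
    dist = np.eye(P.shape[0])[start]
    cdf = []
    for _ in range(horizon):
        dist = dist @ P
        cdf.append(dist[-1])               # probability mass absorbed in failure
    return cdf

print([round(p, 3) for p in life_cdf(start=0, horizon=5)])   # full lifetime
print([round(p, 3) for p in life_cdf(start=2, horizon=5)])   # residual life, degraded unit
```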


Journal ArticleDOI
Tamer Boyaci1
TL;DR: In this paper, the authors consider a multiple-channel distribution system in which a manufacturer sells its product through an independent retailer as well as through his wholly-owned channel and explore the channel inefficiencies induced by the presence of simultaneous vertical competition (double-marginalization) and horizontal competition (substitutability).
Abstract: In this paper, we study a multiple-channel distribution system in which a manufacturer sells its product through an independent retailer as well as through its wholly-owned channel. The manufacturer and the retailer stock the product solely to satisfy the final customer demand of their respective channels. We focus on the stocking levels of the manufacturer's wholly-owned channel and the retail channel. We assume that each channel has a local stochastic demand, but that the products are substitutable, which means there will be spill-over customers in the event that one channel runs out of stock. We explore the channel inefficiencies induced by the presence of simultaneous vertical competition (double marginalization) and horizontal competition (substitutability). We show that there is an overall tendency for both channels to overstock due to substitution, which intensifies under increasing substitution rates. Increasing double marginalization, on the other hand, intensifies the tendency to overstock in th...

167 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the customer and engineering interaction in product portfolio planning, which is becoming increasingly important, manifested by the efforts observed in many industries to improve the coordination of marketing, design and manufacturing activities across product and process segments.
Abstract: A critical decision that faces companies across all industrial sectors is the selection of an optimal mix of product attributes to offer in the marketplace, namely product portfolio planning. The product portfolio planning problem has to date generally been considered from a marketing perspective, with the focus being on customer concerns, i.e., how alternative sets of product attributes and attribute-level options interact and compete among the target customer segments. From the engineering perspective, the operational implications of product portfolio decisions have been tackled with a primary concern about the cost and complexity of interactions among multiple products in a manufacturing environment with increasing variety. Consideration of the customer and engineering interaction in product portfolio planning is becoming increasingly important, manifested by the efforts observed in many industries to improve the coordination of marketing, design and manufacturing activities across product and process p...

166 citations


Journal ArticleDOI
TL;DR: This study lays the foundation for the development of a simulation tool which is general, flexible, intuitive, simple to use and contains default values for most of the system's parameters.
Abstract: In recent years, hospitals have been vigorously searching for ways to reduce costs and improve productivity. One tool, simulation, is now widely accepted as an effective method to assist management in evaluating different operational alternatives. It can help improve existing Emergency Departments (EDs) and assist in planning and designing new EDs. In order to increase the acceptance of simulation in healthcare systems in general and EDs in particular, hospital management should be directly involved in the development of these projects. Such involvement will also bolster the simulation's credibility. In addition, it is important to simplify simulation processes as much as is reasonably possible and use visual aids or animation that will heighten users' confidence in the model's ability. This study lays the foundation for the development of a simulation tool which is general, flexible, intuitive, simple to use and contains default values for most of the system's parameters.

138 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered a multi-commodity supply chain design problem in which they need to determine where to locate facilities and how to allocate customers to facilities so as to minimize total costs.
Abstract: We consider a multi-commodity supply chain design problem in which we need to determine where to locate facilities and how to allocate customers to facilities so as to minimize total costs. The cost associated with each facility exhibits economies of scale. We show that this problem can be formulated as a nonlinear integer program and propose a Lagrangian-relaxation solution algorithm. By exploiting the structure of the problem, we find a low-order polynomial algorithm for the nonlinear integer program that must be solved when solving the Lagrangian relaxation subproblems. We also compare our approach with an existing algorithm. Contributed by the Location and Transportation Modeling Department

131 citations
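The Lagrangian-relaxation machinery can be sketched compactly if the concave (economies-of-scale) facility cost is simplified to a fixed charge plus linear serving costs; the paper's contribution is a specialized polynomial algorithm for the harder concave subproblems. Data below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
fixed = np.array([40.0, 55.0, 60.0])        # facility opening costs
c = rng.uniform(1, 20, size=(3, 8))         # c[i, j]: cost of serving customer j from i

lam = np.zeros(c.shape[1])                  # multipliers on "serve each customer once"
best = -np.inf
for it in range(1, 201):
    red = c - lam                           # reduced assignment costs
    gain = np.minimum(red, 0.0)             # serve j from an open i only if profitable
    fac = fixed + gain.sum(axis=1)
    y = fac < 0                             # open a facility iff it pays for itself
    bound = lam.sum() + fac[y].sum()        # Lagrangian lower bound on total cost
    best = max(best, bound)
    x = (red < 0) & y[:, None]              # subproblem assignments
    g = 1.0 - x.sum(axis=0)                 # subgradient: coverage violation
    lam += (2.0 / it) * g                   # diminishing step size
print(round(best, 2))
```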


Journal ArticleDOI
TL;DR: In this article, a methodology for reactively scheduling nurses in light of shift-by-shift imbalances in supply and demand is presented, where the problem associated with making the daily adjustments is formulated as an Integer Program (IP) and solved within a rolling horizon framework.
Abstract: This paper presents a new methodology for reactively scheduling nurses in light of shift-by-shift imbalances in supply and demand. In most hospitals, the nursing staff is given a midterm schedule that specifies their work assignments for up to 6 weeks at a time. However, emergencies, call-outs, and normal fluctuations in personnel requirements can play havoc with the schedule. As a result, it is necessary to make short-term adjustments, either by reallocating resources when shortages exist or by cancelling assignments when demand drops. The need to take into account individual preferences further complicates the process. The problem associated with making the daily adjustments is formulated as an Integer Program (IP) and solved within a rolling horizon framework. The idea is to consider 24 hours at a time, but to only implement the results for the first 8 hours. The IP is then re-solved for the next 24 hours after several hours have elapsed and new data are available, and so on. Initial attempts to solve ...

116 citations


Journal ArticleDOI
TL;DR: In this article, a Dual CUSUM (DCUSUM) scheme that combines two CUSUM charts is proposed to detect a range of mean shifts; in practice, the exact value of the shift size is often unknown and can only reasonably be assumed to vary within a certain range.
Abstract: Conventional quality control procedures, such as the CUmulative SUM (CUSUM) and exponentially weighted moving average charts, are usually designed based on a mean shift of a given size. In practice, the exact value of the shift size is often unknown and can only be reasonably assumed to vary within a certain range. Such a range of shifts deteriorates the performance of existing control charts. In this paper, a quality control scheme, the Dual CUSUM (DCUSUM), is proposed that combines two CUSUM charts to detect the range of shifts. The out-of-control signal is triggered if either one of the CUSUM statistics goes out of the DCUSUM control limits. In particular, a design procedure for the DCUSUM charts is developed and an analytical formula for the Average Run Length (ARL) calculation is obtained via the Markov chain method. The proposed DCUSUM charts are compared with the conventional CUSUM and combined Shewhart-CUSUM charts. Based on a proposed criterion, the integrated relative ARL, the proposed schemes sho...
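The DCUSUM signalling logic is simple to state in code: run two one-sided CUSUMs with different reference values and flag when either crosses its own limit. The reference values and limits below are illustrative, not the optimized designs from the paper's ARL analysis.

```python
import random

def dual_cusum(data, k_small=0.25, k_large=1.0, h_small=8.0, h_large=3.0):
    """Upper-sided dual CUSUM: signal when either statistic exceeds its limit."""
    s1 = s2 = 0.0
    for t, x in enumerate(data, start=1):
        s1 = max(0.0, s1 + x - k_small)   # reference value tuned for small shifts
        s2 = max(0.0, s2 + x - k_large)   # reference value tuned for large shifts
        if s1 > h_small or s2 > h_large:
            return t                      # index of the first out-of-control signal
    return None

rng = random.Random(7)
stream = [rng.gauss(0, 1) for _ in range(50)] + [rng.gauss(1.5, 1) for _ in range(50)]
print(dual_cusum(stream))                 # the mean shifts after sample 50
```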

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a methodology to optimally allocate tolerances to integrate product and process variables in multi-station manufacturing processes at minimum costs, which is based on the development and integration of three models: (i) the tolerance-variation relation, (ii) variation propagation, and (iii) process degradation.
Abstract: In multi-station manufacturing systems, the quality of final products is significantly affected by both product design as well as process variables. Historically, however, tolerance research has primarily focused on allocating tolerances based on the product design characteristics of each component. Currently, there are no analytical approaches to optimally allocate tolerances to integrate product and process variables in multi-station manufacturing processes at minimum costs. The concept of process-oriented tolerancing expands the current tolerancing practices, which bound errors related to product variables, to explicitly include process variables. The resulting methodology extends the concept of “part interchangeability” into “process interchangeability,” which is critical due to increasing requirements related to the selection of suppliers and benchmarking. The proposed methodology is based on the development and integration of three models: (i) the tolerance-variation relation; (ii) variation propagation; and (iii) process degradation. The tolerance-variation model is based on a pin-hole fixture mechanism in multi-station assembly processes. The variation propagation model utilizes a state space representation but uses a station index instead of a time index. Dynamic process effects such as tool wear are also incorporated into the framework of process-oriented tolerancing, which provides the capability to design tolerances for the whole life-cycle of a production system. The tolerances of process variables are optimally allocated through solving a nonlinear constrained optimization problem. An industry case study is used to illustrate the proposed approach.
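The variation-propagation piece of the methodology is a state-space recursion indexed by station rather than time, x_{k+1} = A_k x_k + B_k u_k. A sketch with placeholder matrices: tolerances enter as the input covariance at each station, and the accumulated output covariance is what a process-oriented tolerance allocation would optimize over.

```python
import numpy as np

A = [np.array([[1.0, 0.1], [0.0, 1.0]])] * 3      # reorientation between stations
B = [np.eye(2)] * 3                                # how fixture errors enter the state
tol_var = [np.diag([0.01, 0.02]),                  # Var(u_k): tolerances at station k
           np.diag([0.02, 0.01]),
           np.diag([0.01, 0.01])]

cov = np.zeros((2, 2))                             # incoming part variation
for Ak, Bk, Vk in zip(A, B, tol_var):
    cov = Ak @ cov @ Ak.T + Bk @ Vk @ Bk.T         # propagate and add station noise
print(np.round(cov, 4))                            # final key-characteristic variation
```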

Journal ArticleDOI
TL;DR: In this article, a formula for predicting the optimal buffer allocation is developed based upon a two-moment approximation involving the closed-form expressions for M/M/1/K systems.
Abstract: The Buffer Allocation Problem (BAP) is a difficult stochastic, integer, nonlinear programming problem. In general, the objective function and constraints of the problem are not available in closed form. An approximation formula for predicting the optimal buffer allocation is developed based upon a two-moment approximation involving the expressions for M/M/1/K systems. The closed-form expressions of the M/M/1/K and M/G/1/K systems are utilized for the BAP in series, merge, and splitting topologies of finite buffer queueing networks. Extensive computational results demonstrate the efficacy of the approach.
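The building block of such approximations is the closed-form M/M/1/K queue. A sketch of the blocking probability and effective throughput as capacity grows (here K is the total system capacity, including the unit in service); parameters are illustrative.

```python
def mm1k(lam, mu, K):
    """Blocking probability and effective throughput of an M/M/1/K queue."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p_block = 1.0 / (K + 1)
    else:
        p_block = (1 - rho) * rho**K / (1 - rho**(K + 1))
    throughput = lam * (1 - p_block)      # accepted (effective) arrival rate
    return p_block, throughput

for K in (1, 2, 5, 10):
    pb, th = mm1k(lam=0.9, mu=1.0, K=K)
    print(K, round(pb, 4), round(th, 4))  # throughput rises with buffer size
```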

Journal ArticleDOI
TL;DR: In this paper, a general accelerated life model for step-stress testing and a general likelihood function formulation for step stress models that use both Weibull and lognormal distributions are presented.
Abstract: In this paper we propose a general accelerated life model for step-stress testing and present a general likelihood function formulation for step-stress models that use both Weibull and lognormal distributions. The proposed model is also applicable to any life distribution in which the stress level only changes the scale parameter of the distribution, and can also be extended to multiple-stress as well as profiled testing patterns. Algorithms for fitting and testing such models are described and illustrated. The model provides a convenient way to interpret the step-stress accelerated life testing results. The practical use of the proposed statistical inference model is demonstrated by a case study.

Book ChapterDOI
Hiroshi Konno1
TL;DR: Important properties of the mean-absolute deviation (MAD) portfolio optimization model, which was introduced in 1990 to cope with very large-scale portfolio optimization problems, are surveyed.
Abstract: We survey important properties of the mean-absolute deviation (MAD) portfolio optimization model, which was introduced in 1990 to cope with very large-scale portfolio optimization problems. The MAD model has been used to solve huge portfolio optimization models, including an internationally diversified investment model, a long-term ALM model, and a mortgage-backed security portfolio optimization model. The MAD model also enjoys several nice theoretical properties. In particular, all CAPM-type relations for the mean-variance model hold for the MAD model as well. Further, the MAD model is more compatible with the fundamental principle of rational decision-making.
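What makes the MAD model scale is that it is a pure linear program. A minimal long-only sketch with four toy return scenarios (scipy's linprog standing in for an industrial LP solver): the absolute deviations are linearized with auxiliary variables d_t >= |deviation_t|.

```python
import numpy as np
from scipy.optimize import linprog

R = np.array([[ 0.02,  0.05, -0.01],    # R[t, i]: return of asset i in scenario t
              [ 0.01, -0.02,  0.03],
              [ 0.03,  0.04,  0.00],
              [-0.01,  0.01,  0.02]])
T, n = R.shape
mu = R.mean(axis=0)
D = R - mu                              # per-scenario deviations from the mean
target = 0.015                          # required expected portfolio return

# Variables z = (x_1..x_n, d_1..d_T); minimize the mean absolute deviation sum(d)/T.
c = np.concatenate([np.zeros(n), np.ones(T) / T])
A_ub = np.vstack([np.hstack([ D, -np.eye(T)]),          #  D x - d <= 0
                  np.hstack([-D, -np.eye(T)]),          # -D x - d <= 0, so d >= |D x|
                  np.concatenate([-mu, np.zeros(T)])])  # mu . x >= target
b_ub = np.concatenate([np.zeros(2 * T), [-target]])
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]   # fully invested
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
print(np.round(res.x[:n], 3))           # optimal long-only weights
```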

Journal ArticleDOI
TL;DR: In this article, a control chart is proposed to monitor changes in the population variance-covariance matrix of a multivariate normal process when individual observations are collected, based on first taking the exponentially weighted moving average of the product of each observation and its transpose.
Abstract: A control chart is proposed to effectively monitor changes in the population variance-covariance matrix of a multivariate normal process when individual observations are collected. The proposed control chart is constructed based on first taking the exponentially weighted moving average of the product of each observation and its transpose. Appropriate statistics which are based on square distances between estimators and true parameters are then developed to detect changes in the variances and covariances of the variance-covariance matrix. The simulation studies show that the proposed control chart outperforms existing procedures in cases where either the variances or correlations increase or both increase. The improvement in performance of the proposed control chart is particularly notable when variables are strongly positively correlated. The proposed control chart is applied to a real-life example taken from the semiconductor industry.
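The chart's first step, smoothing outer products of individual observations, is easy to sketch, together with a squared-distance statistic against the in-control matrix. The statistic and limit below are illustrative simplifications; the paper derives its own statistics and design constants.

```python
import numpy as np

def ewma_cov_chart(X, sigma0, lam=0.1, limit=2.5):
    """EWMA of x x^T tracked against the in-control covariance matrix sigma0."""
    S = sigma0.copy()                                   # start at the in-control matrix
    for t, x in enumerate(X, start=1):
        S = lam * np.outer(x, x) + (1 - lam) * S
        stat = np.linalg.norm(S - sigma0, "fro") ** 2   # squared distance statistic
        if stat > limit:
            return t, round(stat, 3)                    # first signal and its value
    return None, round(stat, 3)

rng = np.random.default_rng(3)
sigma0 = np.eye(2)
X = np.vstack([rng.multivariate_normal([0, 0], sigma0, 100),
               rng.multivariate_normal([0, 0], 3 * sigma0, 100)])   # variance shift
print(ewma_cov_chart(X, sigma0))
```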

Journal ArticleDOI
TL;DR: It is argued that sampling cost alone does not adequately characterize the efficiency of ranking-and-selection procedures, and the cost of switching among the simulations of the alternative systems should also be considered.
Abstract: Statistical Ranking and Selection (R&S) is a collection of experiment design and analysis techniques for selecting the “population” with the largest or smallest mean performance from among a finite set of alternatives. R&S procedures have received considerable research attention in the stochastic simulation community, and they have been incorporated in commercial simulation software. One of the ways that R&S procedures are evaluated and compared is via the expected number of samples (often replications) that must be generated to reach a decision. In this paper we argue that sampling cost alone does not adequately characterize the efficiency of ranking-and-selection procedures, and the cost of switching among the simulations of the alternative systems should also be considered. We introduce two new, adaptive procedures, the minimum switching sequential procedure and the multi-stage sequential procedure with tradeoff, that provide the same statistical guarantees as existing procedures and significantly redu...

Journal ArticleDOI
TL;DR: The results illustrate that IIMOM is effective in capturing different kinds of preference structures of the designer, and it provides a complete and effective solution for medium- and small-scale multiobjective optimization problems.
Abstract: In most practical situations involving reliability optimization, there are several mutually conflicting goals such as maximizing the system reliability and minimizing the cost, weight and volume. This paper develops an effective multiobjective optimization method, the Intelligent Interactive Multiobjective Optimization Method (IIMOM). In IIMOM, the general concept of the model parameter vector is proposed. From a practical point of view, a designer's preference structure model is built using Artificial Neural Networks (ANNs) with the model parameter vector as the input and the preference information articulated by the designer over representative samples from the Pareto frontier as the desired output. Then with the ANN model of the designer's preference structure as the objective, an optimization problem is solved to search for improved solutions and guide the interactive optimization process intelligently. IIMOM is applied to the reliability optimization problem of a multi-stage mixed system with five di...

Journal ArticleDOI
TL;DR: An exact approach based on the partition of the initial problem into two subproblems and the use of a simple local search heuristic to obtain an initial solution is proposed, and it shows an impressive improvement with respect to the computational time and memory required by CPLEX 7.0.
Abstract: We consider a single-period mean-safety portfolio selection problem with transaction costs and integer constraints on the quantities selected for the securities (rounds). We propose an exact approach based on the partition of the initial problem into two subproblems and the use of a simple local search heuristic to obtain an initial solution. To the best of our knowledge, no optimal algorithms have been proposed in the literature for this problem. The proposed approach is simple, general and easily adaptable to other problems. An extensive experimental analysis based on real data from the main international Stock Exchange Markets is performed. The results show, on average, an impressive improvement with respect to the computational time and memory required by CPLEX 7.0. We also show that the solution of the first subproblem can be used on its own as an extremely effective heuristic.

Journal ArticleDOI
TL;DR: In this article, the problem of single-item inventory ordering for a number of stores whose demand fluctuates randomly is modeled as a cooperative game whose players are the stores, and conditions on the holding and penalty costs that ensure subadditivity are given.
Abstract: A company considers centralizing single-item inventory ordering for a number of stores whose demand fluctuates randomly. First, there must be savings passed on to the stores from this centralization arrangement. Then, the savings must be divided among the participating stores in a way that no store (or a subset) will have an incentive to order separately. We model this problem as a cooperative game whose players are the stores. When holding and penalty shortage costs are identical for all subsets of stores, a game based on optimal expected costs (or the corresponding benefits) is subadditive (there are savings from centralization), and for normally distributed demands, whatever their correlations, the core is never empty. When the stores' holding and penalty costs differ, the corresponding game may have an empty core, and in fact, centralization may not be beneficial. We give conditions on the holding and penalty costs that ensure subadditivity. Given inventory centralization and a cost allocation game ba...

Journal ArticleDOI
TL;DR: In this article, a two-stage purchase contract with a demand forecast update is considered, where the buyer can adjust an initial commitment based on an updated demand forecast obtained at a later stage.
Abstract: We study a two-stage purchase contract with a demand forecast update. The purchase contract provides the buyer an opportunity to adjust an initial commitment based on an updated demand forecast obtained at a later stage. An adjustment, if any, incurs a fixed as well as a variable cost. Using a dynamic programming formulation, we obtain optimal solutions for a class of demand distributions. We also discuss how these results can be applied to gain managerial insights that help in making decisions regarding where to allocate efforts in improving the forecast quality and whether or not to sign a contract. Contributed by the Supply Chains/Production-Inventory Systems Department

Journal ArticleDOI
TL;DR: In this article, two multi-product dynamic lot size models with one-way substitution were considered, where the products can be indexed such that a lower-index product may be used to substitute for the demand of a higher-index item.
Abstract: We consider two multi-product dynamic lot size models with one-way substitution, where the products can be indexed such that a lower-index product may be used to substitute for the demand of a higher-index product. In the first model, the product used to meet the demand of another product must be physically transformed into the latter and incur a conversion cost. In the second model, a product can be directly used to satisfy the demand for another product without requiring any physical conversion. Both problems are generally computationally intractable. We develop dynamic programming algorithms that solve the problems in polynomial time when the number of products is fixed. A heuristic is also developed, and computational experiments are conducted to test the effectiveness of the heuristic and the efficiency of the optimal algorithm.

Journal ArticleDOI
TL;DR: A stochastic linear goal programming model for multistage portfolio management takes into account both the investment goal and risk control at each stage and a scenario generation method is proposed that acts as the basis of the portfolio management model.
Abstract: A stochastic linear goal programming model for multistage portfolio management is proposed. The model takes into account both the investment goal and risk control at each stage. A scenario generation method is proposed that acts as the basis of the portfolio management model. In particular, by matching the moments and fitting the descriptive features of the asset returns, a linear programming model is used to generate the single-stage scenarios. Scenarios for multistage portfolio management are generated by incorporating this single-stage method with the time-series model for the asset returns. Moreover, no arbitrage opportunity exists in the proposed method. A real case is solved via the goal programming model and the scenario generation approach, which demonstrates the effectiveness of the model. We also comment on some practical issues of the approach.

Journal ArticleDOI
TL;DR: It is shown that some of the existing desirability function approaches can, in fact, be characterized as special forms of MOO and that various techniques developed in MOO can be successfully utilized to deal with MRO problems.
Abstract: A common problem encountered in product or process design is the selection of optimal parameter levels that involves the simultaneous consideration of multiple response characteristics, called a multi-response surface problem. Notwithstanding the importance of multi-response surface problems in practice, the development of an optimization scheme has received little attention. In this paper, we note that Multi-Response surface Optimization (MRO) can be viewed as a Multi-Objective Optimization (MOO) and that various techniques developed in MOO can be successfully utilized to deal with MRO problems. We also show that some of the existing desirability function approaches can, in fact, be characterized as special forms of MOO. We then demonstrate some MOO principles and methods in order to illustrate how these approaches can be employed to obtain more desirable solutions to MRO problems.
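For context, the desirability approach that the paper recasts as a special form of multi-objective optimization scalarizes each response and aggregates by a geometric mean. A sketch of the classic Derringer-Suich-style functions with invented targets; the paper's own MOO methods go beyond this.

```python
def d_larger_is_better(y, lo, hi, s=1.0):
    """Desirability for a response to be maximized; 0 below lo, 1 above hi."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** s

def d_smaller_is_better(y, lo, hi, s=1.0):
    """Desirability for a response to be minimized: mirror of the above."""
    return d_larger_is_better(-y, -hi, -lo, s)

def overall(ds):
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))         # geometric-mean aggregation

# Two responses at one candidate setting: strength (maximize) and cost (minimize).
d1 = d_larger_is_better(y=72.0, lo=60.0, hi=80.0)
d2 = d_smaller_is_better(y=4.1, lo=3.0, hi=6.0)
print(round(overall([d1, d2]), 3))
```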

Journal ArticleDOI
TL;DR: In this article, the authors developed a real options approach to estimate the value of flexibility and to determine the optimum strategy to manage the flexibility under uncertainty in the currency exchange rate, and developed a Monte Carlo simulation technique that is able to incorporate a large number of variables into the valuation.
Abstract: Flexibility allows firms to compete more effectively in a world of short product life cycles, rapid product development, and substantial demand and/or price uncertainty. We develop a supply chain model in which a manufacturing firm can have the flexibility to select different suppliers, plant locations, and market regions and there can be an implementation time lag for the supply chain operations. We use a real options approach to estimate the value of flexibility and to determine the optimum strategy to manage the flexibility under uncertainty in the currency exchange rate. To price the operational flexibility, we develop a Monte Carlo simulation technique that is able to incorporate a large number of variables into the valuation. We show that without considering time lag impact, the value of the operational flexibility can be significantly overestimated.
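The headline result, that ignoring the implementation lag overstates the value of flexibility, can be reproduced with a stripped-down Monte Carlo: a GBM exchange rate, a binary supplier switch, and a lag between deciding and realizing. The costs and dynamics below are toys, far simpler than the paper's multi-supplier, multi-market model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, dt, lag = 20_000, 12, 1 / 12, 2
sigma_fx, fx0 = 0.15, 1.0
cost_home, cost_abroad = 10.0, 9.5          # foreign cost quoted in foreign currency

# Simulate exchange-rate paths (driftless geometric Brownian motion).
z = rng.standard_normal((n_paths, n_steps))
fx = fx0 * np.exp(np.cumsum(-0.5 * sigma_fx**2 * dt
                            + sigma_fx * np.sqrt(dt) * z, axis=1))

def unit_cost(decide_fx, realize_fx):
    """Choose the cheaper supplier at decision time; pay at realization time."""
    go_abroad = decide_fx * cost_abroad < cost_home
    return np.where(go_abroad, realize_fx * cost_abroad, cost_home)

no_lag = unit_cost(fx[:, lag:], fx[:, lag:]).mean()      # decide and pay together
with_lag = unit_cost(fx[:, :-lag], fx[:, lag:]).mean()   # decide on a stale rate
print(round(cost_home - no_lag, 4), round(cost_home - with_lag, 4))
# Flexibility value vs. always sourcing at home: smaller once the lag is honored.
```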

Journal ArticleDOI
TL;DR: In this paper, coordinate sensor placement for the diagnosis of dimensional variation sources in assembly processes is addressed, and sensitivity indices for the detection of the process mean and variance components are derived in terms of process layout and sensor deployment information.
Abstract: In-process optical coordinate measuring machines offer the potential to diagnose the sources of the variations that are responsible for product quality defects. Such a sensor system can thus help manufacturers to improve product quality and reduce process downtime. The effective use of sensor data in the diagnosis of the sources of variations depends on the optimal design of the sensor system, which is often also called the problem of sensor placement. This paper addresses coordinate sensor placement for the diagnosis of dimensional variation sources in assembly processes. Sensitivity indices for the detection of the process mean and variance components are defined as the design criteria and are derived in terms of process layout and sensor deployment information. Exchange algorithms, originally developed for optimal experimental design, are revised and then used to maximize the detection sensitivity. A sort-and-cut procedure is proposed, which is able to significantly improve the algorithm efficiency of ...

Journal ArticleDOI
TL;DR: The results indicate that the charted statistics are correlated after parameter estimation, and it is shown that this correlation will increase as the mean shift in the input increases.
Abstract: The Cause-Selecting Chart (CSC) is a useful statistical process control tool for analyzing multistage processes. It distinguishes between incoming and outgoing quality problems by establishing a relationship between input and output measurements. In practice, the model relating the input and output must first be estimated before the CSC is implemented. Most previous work on CSCs has focused on the case when the model is estimated without error. Far less is known about the performance of CSCs with parameter estimation errors. This paper investigates the effect of parameter estimation errors on the performance of CSCs. The results indicate that the charted statistics are correlated after parameter estimation. It is also shown that this correlation will increase as the mean shift in the input increases. The implications for the use of CSCs when parameters are estimated are discussed, including the use of widened control limits and self-starting procedures.
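A cause-selecting chart is, at heart, a regression-adjusted control chart, and the estimation step the paper analyzes is visible in a few lines: the input-output model is fitted from Phase I data, and its errors carry into the Phase II residual chart. All numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Phase I: estimate the input-output model from in-control data.
x0 = rng.normal(10, 1, 100)
y0 = 2.0 + 0.5 * x0 + rng.normal(0, 0.2, 100)
b1, b0 = np.polyfit(x0, y0, 1)                  # estimated slope and intercept
resid0 = y0 - (b0 + b1 * x0)
center, sd = resid0.mean(), resid0.std(ddof=2)  # two parameters were estimated

# Phase II: chart standardized residuals; only *outgoing* faults should signal.
x1 = rng.normal(10, 1, 30)
y1 = 2.0 + 0.5 * x1 + rng.normal(0, 0.2, 30)
y1[20:] += 0.8                                  # fault at this stage, not upstream
z = (y1 - (b0 + b1 * x1) - center) / sd
print(np.where(np.abs(z) > 3)[0])               # points flagged by the CSC
```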

Journal ArticleDOI
TL;DR: This paper derives closed-form analytical results to evaluate the performance of an AS/R system under stochastic demand and to determine whether or not it meets throughput requirements.
Abstract: In this paper we are concerned with the throughput performance of an Automated Storage/Retrieval (AS/R) system under stochastic demand, i.e., the case where storage and retrieval requests arrive randomly. Although AS/R systems have been the subject of extensive research, their performance under stochastic demand remains relatively unexplored. In fact, with random storage and retrieval requests, the primary tool for AS/R system analysis has been simulation. Assuming a particular dwell point strategy for the storage/retrieval machine, in this paper we derive closed-form analytical results to evaluate the performance of an AS/R system under stochastic demand and determine whether or not it meets throughput requirements. Although the results are derived for a given system, they can also be used in the design or evaluation of new/proposed systems.

Journal ArticleDOI
TL;DR: A mixed-binary programming model is developed as an aid for generating mine production schedules in order to obtain coal of the desired quality and maximize the associated net present value.
Abstract: Recognizing the complexity of coal mining management, e.g., the scarcity of financial resources and a variation in the quality of coal found in different sections of a mine, in this paper, we develop a mixed-binary programming model as an aid for generating mine production schedules in order to obtain coal of the desired quality and maximize the associated net present value. The model is based on the definition of the mine layout as a precedence network, with the nodes representing mining sections. A general solution methodology based on the Benders' decomposition of the model is developed. It exploits the special nature of the resulting subproblems in order to develop an effective solution procedure. Computational experience of this procedure, along with the results of its application to a mining case, are presented.

Journal ArticleDOI
TL;DR: Data sets that are multivariate in nature are common in modern engineering, science, and business and in some analysis problems, univariate statistical methods applied “one variable at a time” (such a...
Abstract: Data sets that are multivariate in nature are common in modern engineering, science, and business. In some analysis problems, univariate statistical methods applied “one variable at a time” (such a...