
Showing papers in "Iie Transactions in 2003"


Journal ArticleDOI
TL;DR: A review of Categorical Data Analysis, Second Edition, appearing in IIE Transactions, Vol. 35, No. 6, pp. 583-584.
Abstract: (2003). Categorical Data Analysis, Second Edition. IIE Transactions: Vol. 35, No. 6, pp. 583-584.

675 citations


Journal ArticleDOI
TL;DR: In this paper, a fuzzy AHP with an extent analysis approach is proposed to determine the importance weights for the customer requirements, which can improve the imprecise ranking of customer requirements inherited from studies based on the conventional AHP.
Abstract: In the Quality Function Deployment (QFD) process, determining the importance weights for the customer requirements is an essential and crucial process. The Analytic Hierarchy Process (AHP) has been used to determine the importance weights for product planning, but this has occurred mainly in crisp (non-fuzzy) decision applications. However, human judgment on the importance of customer requirements is always imprecise and vague. To make up for this deficiency in the AHP, a fuzzy AHP with an extent analysis approach is proposed to determine the importance weights for the customer requirements. In the method, triangular fuzzy numbers are used for the pairwise comparison of a fuzzy AHP. By using the extent analysis method and the principles for the comparison of fuzzy numbers, one can derive the weight vectors. The new approach can improve the imprecise ranking of customer requirements inherited from studies based on the conventional AHP. Furthermore, the fuzzy AHP with extent analysis is simple and easy to i...
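As a minimal sketch of the extent analysis step described above, the Python fragment below derives weights from a small triangular-fuzzy pairwise comparison matrix using Chang-style synthetic extents and degrees of possibility. The matrix entries and the three customer requirements they compare are hypothetical, not taken from the paper.

```python
# Minimal sketch of fuzzy AHP weight derivation via extent analysis,
# using triangular fuzzy numbers (l, m, u). Matrix values are hypothetical.

def fuzzy_sum(tfns):
    """Component-wise sum of triangular fuzzy numbers."""
    return tuple(sum(x[i] for x in tfns) for i in range(3))

def synthetic_extents(matrix):
    """Fuzzy synthetic extent S_i = (row_i sum) * (total sum)^(-1)."""
    row_sums = [fuzzy_sum(row) for row in matrix]
    tl, tm, tu = fuzzy_sum(row_sums)                 # total of all entries
    # Inverse of a positive TFN (l, m, u) is (1/u, 1/m, 1/l).
    return [(l / tu, m / tm, u / tl) for (l, m, u) in row_sums]

def possibility(m2, m1):
    """Degree of possibility V(M2 >= M1) for triangular fuzzy numbers."""
    l1, mm1, u1 = m1
    l2, mm2, u2 = m2
    if mm2 >= mm1:
        return 1.0
    if l1 >= u2:
        return 0.0
    return (l1 - u2) / ((mm2 - u2) - (mm1 - l1))

def extent_weights(matrix):
    s = synthetic_extents(matrix)
    d = [min(possibility(s[i], s[k]) for k in range(len(s)) if k != i)
         for i in range(len(s))]
    total = sum(d)
    return [di / total for di in d]                  # normalized weights

# Hypothetical 3x3 fuzzy pairwise comparison of customer requirements.
one = (1.0, 1.0, 1.0)
M = [[one,                     (1.5, 2.0, 2.5),         (2.5, 3.0, 3.5)],
     [(1/2.5, 1/2.0, 1/1.5),   one,                     (1.5, 2.0, 2.5)],
     [(1/3.5, 1/3.0, 1/2.5),   (1/2.5, 1/2.0, 1/1.5),   one]]

print(extent_weights(M))   # importance weights for the three requirements
```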

449 citations


Journal ArticleDOI
TL;DR: This study exploits the problem structure to derive upper bounds that are independent of job duration distribution type and presents new analytical insights into the problem as well as a series of numerical experiments that illustrate properties of the optimal solution with respect to distribution type, cost structure, and number of jobs.
Abstract: This study is concerned with the determination of optimal appointment times for a sequence of jobs with uncertain durations. Such appointment systems are used in many customer service applications to increase the utilization of resources, match workload to available capacity, and smooth the flow of customers. We show that the problem can be expressed as a two-stage stochastic linear program that includes the expected cost of customer waiting, server idling, and a cost of tardiness with respect to a chosen session length. We exploit the problem structure to derive upper bounds that are independent of job duration distribution type. These upper bounds are used in a variation of the standard L-shaped algorithm to obtain optimal solutions via successively finer partitions of the support of job durations. We present new analytical insights into the problem as well as a series of numerical experiments that illustrate properties of the optimal solution with respect to distribution type, cost structure, and numbe...
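To make the cost structure concrete, here is a small Monte Carlo evaluator for a fixed vector of appointment times that accumulates expected customer waiting, server idling, and tardiness beyond a chosen session length; it only evaluates a schedule rather than solving the stochastic program. The duration distribution, cost rates, and appointment vector are illustrative assumptions, not values from the paper.

```python
import random

def schedule_cost(appts, session_length, sample_durations,
                  c_wait=1.0, c_idle=1.0, c_tardy=1.5, n_reps=20000):
    """Estimate E[waiting + idling + tardiness cost] for fixed appointment times."""
    total = 0.0
    for _ in range(n_reps):
        durations = sample_durations()
        finish = 0.0                      # time the server becomes free
        wait = idle = 0.0
        for a, d in zip(appts, durations):
            start = max(a, finish)
            wait += start - a             # customer waits if the server is busy
            idle += max(0.0, a - finish)  # server idles if the customer is not due yet
            finish = start + d
        tardy = max(0.0, finish - session_length)
        total += c_wait * wait + c_idle * idle + c_tardy * tardy
    return total / n_reps

# Hypothetical instance: 5 jobs, lognormal durations, equally spaced slots.
def sample_durations(n=5):
    return [random.lognormvariate(0.0, 0.4) for _ in range(n)]

appts = [0.0, 1.2, 2.4, 3.6, 4.8]
print(schedule_cost(appts, session_length=6.0, sample_durations=sample_durations))
```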

338 citations


Journal ArticleDOI
TL;DR: In this article, a Tabu search meta-heuristic has been developed and successfully demonstrated to provide solutions to the system reliability optimization problem of redundancy allocation, which generally involves the selection of components and redundancy levels to maximize system reliability given various system-level constraints.
Abstract: A tabu search meta-heuristic has been developed and successfully demonstrated to provide solutions to the system reliability optimization problem of redundancy allocation. Tabu search is particularly well-suited to this problem and it offers distinct advantages compared to alternative optimization methods. While there are many forms of the problem, the redundancy allocation problem generally involves the selection of components and redundancy levels to maximize system reliability given various system-level constraints. This is a common and extensively studied problem involving system design, reliability engineering and operations research. It is becoming increasingly important to develop efficient solutions to this reliability optimization problem because many telecommunications (and other) systems are becoming more complex, yet with short development schedules and very stringent reliability requirements. Tabu search can be applied to a more diverse problem domain compared to mathematical programming meth...
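The sketch below shows the flavor of a tabu search for a stripped-down redundancy allocation problem: a series system of active-parallel subsystems, one component type per subsystem, a budget constraint, moves that add or remove one redundant component, and a short-term tabu list with an aspiration rule. All data and the neighborhood/tenure choices are hypothetical and far simpler than the component-choice formulation in the paper.

```python
import math

# Hypothetical data: one component type per subsystem.
rel  = [0.90, 0.85, 0.92, 0.88]      # component reliabilities
cost = [4.0, 3.0, 5.0, 2.0]          # component costs
budget = 40.0

def system_reliability(levels):
    """Series system of active-parallel subsystems."""
    return math.prod(1.0 - (1.0 - r) ** n for r, n in zip(rel, levels))

def feasible(levels):
    return all(n >= 1 for n in levels) and \
           sum(c * n for c, n in zip(cost, levels)) <= budget

def tabu_search(iters=500, tenure=5):
    current = [1] * len(rel)
    best, best_r = list(current), system_reliability(current)
    tabu = {}                                        # move -> iteration it expires
    for it in range(iters):
        candidates = []
        for i in range(len(rel)):
            for delta in (+1, -1):                   # add or drop one redundant unit
                nxt = list(current)
                nxt[i] += delta
                if not feasible(nxt):
                    continue
                r = system_reliability(nxt)
                # Aspiration: a tabu move is allowed if it beats the best so far.
                if tabu.get((i, delta), -1) > it and r <= best_r:
                    continue
                candidates.append((r, (i, delta), nxt))
        if not candidates:
            break
        r, (i, delta), nxt = max(candidates, key=lambda c: c[0])
        current = nxt
        tabu[(i, -delta)] = it + tenure              # forbid immediate reversal
        if r > best_r:
            best, best_r = list(current), r
    return best, best_r

print(tabu_search())
```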

257 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used the structure of the optimal solution as the basis for a simple closed-form heuristic for setting appointment times, which was shown to perform on average within 2% (and generally within 0.5%) of an optimal policy.
Abstract: Consider the problem facing a doctor (or other service provider) who is setting patient appointment times in the presence of random service times. He or she must balance the patients' waiting times (if the appointments are scheduled too closely together) against the doctor's idle time (if the appointments are spaced too far apart). Although this problem is fairly intractable, this paper uses the structure of the optimal solution as the basis for a simple closed-form heuristic for setting appointment times. Over a wide test bed of problems, this heuristic is shown to perform on average within 2% (and generally within 0.5%) of the optimal policy.

215 citations


Journal ArticleDOI
TL;DR: The methodology presented here is specifically developed to accommodate the case where there is a choice of redundancy strategy and can result in more reliable and cost-effective engineering designs.
Abstract: Optimal solutions to the redundancy allocation problem are determined when either active or cold-standby redundancy can be selectively chosen for individual subsystems. This problem involves the selection of components and redundancy levels to maximize system reliability. Previously, solutions to the problem could only be found if analysts were restricted to a predetermined redundancy strategy for the complete system. Generally, it had been assumed that active redundancy was to be used. However, in practice both active and cold-standby redundancy may be used within a particular system design and the choice of redundancy strategy becomes an additional decision variable. Available optimization algorithms are inadequate for these design problems and better alternatives are required. The methodology presented here is specifically developed to accommodate the case where there is a choice of redundancy strategy. The problem is formulated with imperfect sensing and switching of cold-standby redundant components ...
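For intuition about why the choice of strategy matters, the short sketch below compares, for exponential component lifetimes, a two-unit active-parallel subsystem with a two-unit cold-standby subsystem whose switch succeeds only with probability rho. The rate, mission time, and switch reliability are hypothetical, and this is the textbook special case rather than the paper's optimization model.

```python
import math

lam, t, rho = 0.01, 100.0, 0.95   # failure rate, mission time, switch reliability (hypothetical)

def active_parallel(n):
    """n identical components in active (hot) parallel redundancy."""
    return 1.0 - (1.0 - math.exp(-lam * t)) ** n

def cold_standby_two_units():
    """Two-unit cold standby with an imperfect switch of reliability rho."""
    # Survive if the primary lasts to t, or it fails and the switch works
    # and the standby covers the remaining time (Erlang-2 term).
    return math.exp(-lam * t) + rho * lam * t * math.exp(-lam * t)

print("active parallel, n=2:", active_parallel(2))
print("cold standby,    n=2:", cold_standby_two_units())
```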

207 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed an integrated model that simultaneously determines production scheduling and preventive maintenance planning decisions so that the total weighted tardiness of jobs is minimized, and investigated the benefits of integration through a numerical study of small problems.
Abstract: Production scheduling and preventive maintenance planning decisions are inter-dependent but most often made independently. Given that maintenance affects available production time and elapsed production time affects the probability of machine failure, this inter-dependency seems to be overlooked in the literature. We propose an integrated model that simultaneously determines production scheduling and preventive maintenance planning decisions so that the total weighted tardiness of jobs is minimized. We investigate the benefits of integration through a numerical study of small problems. We compare the integrated solution and its performance with the solutions obtained from solving the production scheduling and preventive maintenance planning problems independently. The numerical results show an average reduction of 30% in expected total weighted tardiness. Finally, we discuss the issues related to solving larger problems and extensions for future study.

163 citations


Journal ArticleDOI
TL;DR: In this article, the problem of assembly line design with parallel stations can be treated as a special case of equipment selection for an assembly line, and two problem formulations, minimizing the number of stations, and minimizing the total cost, are discussed.
Abstract: This paper studies the problem of assembly line design, focusing on station paralleling and equipment selection. Two problem formulations, minimizing the number of stations, and minimizing the total cost, are discussed. The latter formulation is demonstrated by several examples, for different assembly system conditions: labor intensive or equipment intensive, and with task times that may exceed the required cycle time. It is shown that the problem of assembly system design with parallel stations can be treated as a special case of the problem of equipment selection for an assembly line. A branch and bound optimal algorithm developed for the equipment selection problem is adapted to solve the parallel station problem. Experiments are designed to investigate and demonstrate the influence of system parameters, such as assembly sequence flexibility and cycle time, on the balancing improvement due to station paralleling. An ILP formulation is developed for the combined problem of station paralleling with equip...

162 citations


Journal ArticleDOI
TL;DR: This review of Michael R. Chernick's Bootstrap Methods describes a practitioner's guide that offers practical guidelines for the use of the bootstrap, along with an overview of its application to a wide range of problems.
Abstract: What was originally intended as a companion to Phillip Good’s Permutation Tests has arrived in Bootstrap Methods by Michael R. Chernick, for the Wiley Series in Probability and Statistics. It is a practitioner’s guide through extensive applications that also includes a detailed literature review to motivate advances in the subject. The book begins by generally answering, “What is Bootstrapping?” in Chapter 1 with a background, an introduction and mention of applications. The reader is provided here with a basic explanation of Efron’s bootstrap methods and previews their application to topics such as censored and missing data, subset selection, kriging and p-value adjustment that are discussed in more detail in Chapter 8. An interesting historical perspective and a comprehensive literature review are also presented. Chapters 2 (“Estimation”) and 3 (“Confidence Sets and Hypothesis Testing”) are an update of Efron and Tibshirani’s An Introduction to the Bootstrap. The estimation chapter covers the classification problem and location and dispersion parameters. Section 3.1 deals with nonparametric confidence intervals: the percentile method, bootstrap iteration and the bootstrap method. Section 3.2 connects confidence intervals and hypothesis tests. The clinical trial analysis in Section 3.3 is illustrative (and illustrated). The topic of Chapter 4 is “Regression Analysis.” The bootstrap is shown to help determine confidence and prediction intervals for non-Gaussian error, both in the linear and nonlinear cases. There is also a section on nonparametric models. “Forecasting and Time Series Analysis” are approached in Chapter 5 before the provocative question “Which Resampling Method Should You Use?” is posed in Chapter 6. Traditional methods, like the jackknife, cross-validation and the delta method, are described in this chapter, along with variants of the bootstrap, similarities, differences and preferences. In Chapter 7 (“Efficient and Effective Simulation”) Hall’s bootstrap and the Edgeworth expansion basis for suggesting a number of Monte Carlo iterations are summarized. Practical guidelines are provided. Added special topics in Chapter 8 are determining the number of distributions in a mixture model, process capability indices and point processes. The last chapter highlights one purpose (and another strength) of the book: “to exhibit counterexamples to the consistency of bootstrap estimates so that the reader will be aware of the limitations of the methods.” Cautionary examples are described where bootstrapping fails. The book is chiefly an introduction to resampling procedures that affords the reader just enough theoretical development for a clear and practical understanding of the bootstrap. Still, the statistician with a more advanced theoretical background and/or more experience in bootstrap methods should appreciate the book’s extensive bibliography and chapter-ending historical notes.
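As a minimal illustration of the percentile method mentioned in Section 3.1, the sketch below bootstraps a confidence interval for a sample mean; the data and confidence level are hypothetical placeholders.

```python
import random

def percentile_bootstrap_ci(data, stat=lambda x: sum(x) / len(x),
                            n_boot=5000, alpha=0.05):
    """Efron-style percentile bootstrap confidence interval for a statistic."""
    reps = []
    n = len(data)
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in range(n)]   # sample with replacement
        reps.append(stat(resample))
    reps.sort()
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

random.seed(1)
sample = [random.gauss(10.0, 2.0) for _ in range(30)]        # hypothetical data
print(percentile_bootstrap_ci(sample))
```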

142 citations


Journal ArticleDOI
TL;DR: A Bayesian approach to probabilistic input modeling for simulation experiments that accounts for the parameter and stochastic uncertainties inherent in most simulations and that yields valid predictive inferences about outputs of interest is formulated and evaluated.
Abstract: We formulate and evaluate a Bayesian approach to probabilistic input modeling for simulation experiments that accounts for the parameter and stochastic uncertainties inherent in most simulations and that yields valid predictive inferences about outputs of interest. We use prior information to construct prior distributions on the parameters of the input processes driving the simulation. Using Bayes' rule, we combine this prior information with the likelihood function of sample data observed on the input processes to compute the posterior parameter distributions. In our Bayesian simulation replication algorithm, we estimate parameter uncertainty by independently sampling new values of the input-model parameters from their posterior distributions on selected simulation runs; and we estimate stochastic uncertainty by performing multiple (conditionally) independent runs with each set of parameter values. We formulate performance measures relevant to both Bayesian and frequentist input-modeling techniques, and ...
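A minimal sketch of this replication scheme, assuming the uncertain input is the exponential service rate of a single-server FIFO queue with a conjugate gamma prior: each outer iteration draws a new rate from the posterior (parameter uncertainty), and the inner replications with that rate capture stochastic uncertainty. The prior, data, and queue model are hypothetical.

```python
import random

# Hypothetical observed service times (the "sample data" on the input process).
data = [0.8, 1.1, 0.6, 1.4, 0.9, 1.2, 0.7, 1.0]

# Conjugate update: Exponential(rate) likelihood with Gamma(a0, b0) prior on the rate.
a0, b0 = 2.0, 2.0
a_post, b_post = a0 + len(data), b0 + sum(data)

def simulate_mean_wait(service_rate, arrival_rate=0.7, n_customers=2000):
    """Lindley recursion for the mean waiting time in a FIFO single-server queue."""
    wait, total = 0.0, 0.0
    for _ in range(n_customers):
        service = random.expovariate(service_rate)
        interarrival = random.expovariate(arrival_rate)
        wait = max(0.0, wait + service - interarrival)
        total += wait
    return total / n_customers

random.seed(2)
outputs = []
for _ in range(50):                                      # parameter-uncertainty loop
    rate = random.gammavariate(a_post, 1.0 / b_post)     # posterior draw of the rate
    for _ in range(5):                                   # stochastic-uncertainty loop
        outputs.append(simulate_mean_wait(rate))

print("posterior-predictive mean waiting time:", sum(outputs) / len(outputs))
```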

125 citations


Journal ArticleDOI
TL;DR: This incomparable volume has important theoretical underpinnings and a depth of discussion that is seldom found in similar books, and it is destined to remain a definitive reference in its field for many years to come.
Abstract: (2003). Reliability of Computer Systems and Networks: Fault Tolerance, Analysis, and Design. IIE Transactions: Vol. 35, No. 6, pp. 586-587.

Journal ArticleDOI
TL;DR: In this article, an approach based on ant techniques is presented to effectively address the assembly line balancing problem with the complicating factors of parallel workstations, stochastic task durations, and mixed-models.
Abstract: This paper presents an approach, based on ant techniques, to effectively address the assembly line balancing problem with the complicating factors of parallel workstations, stochastic task durations, and mixed-models. A methodology was inspired by the behavior of social insects in an attempt to distribute tasks among workers so that strategic performance measures are optimized. This methodology is used to address several assembly line balancing problems from the literature. The assembly line layouts obtained from these solutions are used for simulated production runs so that output performance measures (such as cycle time performance) are obtained. Output performance measures resulting from this approach are compared to output performance measures obtained from several other heuristics, such as simulated annealing. A comparison shows that the ant approach is competitive with the other heuristic methods in terms of these performance measures.

Journal ArticleDOI
TL;DR: In this paper, two competing data mining techniques, nonlinear regression analysis and computational neural networks, are applied in developing the empirical models, and the values of surface roughness predicted by these models are then compared with those from some of the representative models in the literature.
Abstract: Surface roughness plays an important role in product quality and manufacturing process planning. This research focuses on developing an empirical model for surface roughness prediction in finish turning. The model considers the following working parameters: work-piece hardness (material), feed, cutter nose radius, spindle speed and depth of cut. Two competing data mining techniques, nonlinear regression analysis and computational neural networks, are applied in developing the empirical models. The values of surface roughness predicted by these models are then compared with those from some of the representative models in the literature. Metal cutting experiments and tests of hypothesis demonstrate that the models developed in this research have a satisfactory goodness of fit. A rigorous procedure for model validation and model comparison is also presented. In addition, some future research directions are outlined.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a two-stage model of a manufacturing supply chain, which features capacitated production in two stages, and a fixed cost (or concave cost) for transporting the product between the stages.
Abstract: We develop a two-stage model of a manufacturing supply chain. This two-stage production-transportation model features capacitated production in two stages, and a fixed cost (or concave cost) for transporting the product between the stages. We prove several properties of this model, which we call the Two-Stage Production-Distribution Problem (2SPDP). By placing "non-speculative" assumptions on production and transportation, we show that our model reduces to a related model, with one capacitated production stage with linear production cost, and transportation between two inventory locations with non-linear transportation cost. Finally, we present polynomial algorithms for this model under several different transportation cost structures and capacity assumptions.

Journal ArticleDOI
TL;DR: This paper incorporates the Tabu Search heuristic into the NP framework and demonstrates through numerical examples that using the hybrid method results in superior solutions for buffer allocation problems in large production lines.
Abstract: The optimal allocation of buffers in production lines is an important research issue in the design of a manufacturing system. We present a new hybrid algorithm for this complex design problem: the hybrid Nested Partitions (NP) and Tabu Search (TS) method. The Nested Partitions method is globally convergent and can utilize many of the existing heuristic methods to speed up its convergence. In this paper, we incorporate the Tabu Search heuristic into the NP framework and demonstrate through numerical examples that using the hybrid method results in superior solutions. Our numerical results illustrate that the new algorithm is very efficient for buffer allocation problems in large production lines.

Journal ArticleDOI
TL;DR: In this article, the authors study the bullwhip effect in an order-up-to supply-chain system when minimum Mean Square Error (MSE) optimal forecasting is employed, as opposed to some commonly used simplistic forecasting schemes.
Abstract: In the supply chain management area, there has been much recent attention paid to a phenomenon known as the bullwhip effect. The bullwhip effect represents the situation where demand variability increases as one moves up the supply chain. In this paper, we study this effect in an order-up-to supply-chain system when minimum Mean Square Error (MSE) optimal forecasting is employed as opposed to some commonly used simplistic forecasting schemes. We find that, depending on the correlative structure of the demand process, it is possible to reduce, or even eliminate (i.e., "de-whip"), the bullwhip effect in such a system by using an MSE-optimal forecasting scheme. Beyond the bullwhip effect, we also determine the exact time-series nature of the upstream demand processes.
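To see the effect numerically, the sketch below simulates a common textbook order-up-to policy facing AR(1) demand and compares the order-variance amplification (the bullwhip ratio) under the MSE-optimal conditional-mean forecast with a simple moving-average forecast. The demand parameters, lead time, and window length are hypothetical, and the inventory model is a generic one rather than the paper's exact formulation.

```python
import random
import statistics as st

mu, phi, sigma, L = 20.0, 0.7, 2.0, 2   # AR(1) demand parameters and lead time (hypothetical)

def simulate(forecaster, T=20000, burn=500):
    d_prev, prev_level = mu, None
    history, demands, orders = [], [], []
    for t in range(T):
        d = mu + phi * (d_prev - mu) + random.gauss(0.0, sigma)
        history.append(d)
        # Order-up-to level = forecast of demand over the next L+1 periods
        # (the constant safety stock is omitted; it does not affect order variance).
        level = sum(forecaster(history, i) for i in range(1, L + 2))
        if prev_level is not None and t >= burn:
            demands.append(d)
            orders.append(d + level - prev_level)    # order = demand + change in level
        prev_level, d_prev = level, d
    return st.pvariance(orders) / st.pvariance(demands)

def mse_forecast(history, i):
    """MSE-optimal i-step-ahead forecast for AR(1): mu + phi^i (d_t - mu)."""
    return mu + phi ** i * (history[-1] - mu)

def moving_average_forecast(history, i, window=5):
    return sum(history[-window:]) / min(window, len(history))

random.seed(3)
print("bullwhip ratio, MSE-optimal forecast:", simulate(mse_forecast))
print("bullwhip ratio, moving average      :", simulate(moving_average_forecast))
```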

Journal ArticleDOI
TL;DR: In this article, an example is given in which increased variability of the individual demands actually reduces the benefits of risk pooling, although when increased variability is restricted to a common linear transformation, the greater the demand variability, the larger the benefit of consolidation.
Abstract: The benefits of pooling risks, manifested in inventory management by consolidating multiple random demands in one location, are well known. What is less well understood are the determinants of the magnitude of the savings. Recently there has been speculation about the impact of demand variabilities on the benefits of risk pooling. We provide an example where increased variability of the individual demands actually reduces the benefits of risk pooling. We prove, however, that if we restrict increased variability to a common linear transformation, the greater the demand variabilities the larger the benefits of consolidating them, in agreement with intuition. We also provide bounds on the benefits of the consolidation of demands. Our results do not require independence of the demands, apply to any number of pooled demands, and are obtained in a pure cost-driven model.

Journal ArticleDOI
TL;DR: A heuristic is proposed for a real-life identical parallel machine scheduling problem with sequence-dependent setup times and job splitting to minimize makespan; a lower bound is developed and the performance of the heuristic is evaluated on a large number of randomly generated instances.
Abstract: In this paper, we consider a simplified real-life identical parallel machine scheduling problem with sequence-dependent setup times and job splitting to minimize makespan. We propose a heuristic to solve this problem. Our method is composed of two parts. The problem is first reduced to a single machine scheduling problem with sequence-dependent setup times. This reduced problem can be transformed into a Traveling Salesman Problem (TSP), which can be efficiently solved using Little's method. In the second part, a feasible initial solution to the original problem is obtained by exploiting the results of the first part. This initial solution is then improved in a step-by-step manner, taking into account the setup times and job splitting. We develop a lower bound and evaluate the performance of our heuristic on a large number of randomly generated instances. The solution given by our heuristic is within 4.88% of the lower bound.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the non-cooperative competition of two make-to-order firms with different service capacities and different values of service, and obtain sufficient conditions for the existence of a Nash equilibrium.
Abstract: In recent years, there has been considerable research on price competition in a market where customers are sensitive to production or service delays. Most of these works assume identical firms with only different service speeds (capacities), and find that the firm with the higher speed can usually charge a premium price and take a larger market share. We consider the (non-cooperative) competition of two make-to-order firms. In addition to different service capacities, the competing firms may provide different values of service, and have firm-dependent unit costs of waiting. We obtain sufficient conditions for the existence of a Nash equilibrium, and we characterize the equilibrium analytically for some cases and numerically for some other cases. Our results confirm that the firm with the higher speed of service can usually charge a premium price and does take a larger market share. In addition, we find that the firm with the higher value of service and lower cost of waiting can usually charge a premium pr...

Journal ArticleDOI
TL;DR: This paper reports on the work done to implement statistical error control within a heuristic search procedure, and on an automated procedure to deliver a statistical guarantee after the search procedure is finished.
Abstract: Research on the optimization of stochastic systems via simulation often centers on the development of algorithms for which global convergence can be guaranteed. On the other hand, commercial software applications that perform optimization via simulation typically employ search heuristics that have been successful in deterministic settings. Such search heuristics give up on global convergence in order to be more generally applicable and to yield rapid progress towards good solutions. Unfortunately, commercial applications do not always formally account for the randomness in simulation responses, meaning that their progress may be no better than a random search if the variability of the outputs is high. In addition, they do not provide statistical guarantees about the "goodness" of the final results. In practice, simulation studies often rely heavily on engineers who, in addition to developing the simulation model and generating the alternatives to be compared, must also perform the statistical analyses off-l...

Journal ArticleDOI
TL;DR: A general model for adaptive c, np, u and p control charts in which one, two or three design parameters (sample size, sampling interval and control limit width) switch between two values, according to the most recent process information is developed.
Abstract: We develop a general model for adaptive c, np, u and p control charts in which one, two or three design parameters (sample size, sampling interval and control limit width) switch between two values, according to the most recent process information. For a given in-control average sampling rate and a given false alarm rate, the adaptive chart detects changes in the process much faster than a chart with fixed parameters. Moreover, this study also offers general guidance on how to choose an effective design.
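The sketch below simulates one simple member of this family: an np-type chart whose sample size switches between a small and a large value depending on whether the previous standardized count landed in the warning region. The limits, sample sizes, and fractions nonconforming are hypothetical and are not the optimized designs derived in the paper.

```python
import math
import random

p0 = 0.05                       # in-control fraction nonconforming (hypothetical)
n_small, n_large = 25, 100      # the two sample sizes the chart switches between
warning, control = 1.0, 3.0     # warning and control limits in standard-deviation units

def standardized_count(n, p):
    d = sum(random.random() < p for _ in range(n))        # number nonconforming
    return (d - n * p0) / math.sqrt(n * p0 * (1.0 - p0))

def run_length(p, max_samples=100000):
    """Samples taken until the adaptive np chart signals, for true fraction p."""
    n = n_small
    for t in range(1, max_samples + 1):
        z = standardized_count(n, p)
        if abs(z) > control:
            return t
        # Adaptive rule: tighten (use the larger sample) after a warning-zone point.
        n = n_large if abs(z) > warning else n_small
    return max_samples

random.seed(4)
for p in (0.05, 0.10):
    arl = sum(run_length(p) for _ in range(500)) / 500
    print(f"true p = {p:.2f}: average run length = {arl:.1f}")
```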

Journal ArticleDOI
TL;DR: A structure and architecture for the rapid realization of a simulation-based real-time shop floor control system for a discrete part manufacturing system is presented and has been implemented and tested for six manufacturing systems.
Abstract: In this paper, a structure and architecture for the rapid realization of a simulation-based real-time shop floor control system for a discrete part manufacturing system is presented. The research focuses on automatic simulation model and execution system generation from a production resource model. An Automatic Execution Model Generator (AEMG) has been designed and implemented for generating a Message-based Part State Graph (MPSG)-based shop level execution model. An Automatic Simulation Model Generator (ASMG) has been designed and implemented for generating an Arena simulation model based on a resource model (MS Access 97) and an MPSG-based shop level execution model. A commercial finite capacity scheduler, Tempo, has been used to provide schedule information for the Arena simulation model. This research has been implemented and tested for six manufacturing systems, including The Pennsylvania State University CIM laboratory.

Journal ArticleDOI
TL;DR: In this paper, the concepts of duality, equivalence, and dominance are used to characterize multi-state systems and to evaluate the system state distribution of multi-state consecutive-k-out-of-n systems.
Abstract: In the binary context, a consecutive-k-out-of-n system is failed if and only if at least k consecutive components are failed. In this paper we propose definitions of the multi-state consecutive-k-out-of-n:F and G systems. In the proposed definition, both the system and its components may be in one of M + 1 possible states: 0, 1, ..., M. The dual relationship between the proposed systems is identified. The concept of dominance is used to characterize the properties of multi-state systems. The concepts of duality, equivalence, and dominance are used in the evaluation of the system state distribution of multi-state consecutive-k-out-of-n systems. An algorithm is provided for evaluating the system state distribution of decreasing multi-state consecutive-k-out-of-n:F systems. Another algorithm is provided to bound the system state distribution of multi-state consecutive-k-out-of-n:F and G systems. Several examples are included to illustrate the proposed definitions, concepts, and algorithms.
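For reference, the sketch below evaluates the classical binary consecutive-k-out-of-n:F system (the two-state special case of the definitions above) with a small dynamic program over the length of the trailing run of failed components; the paper's algorithms generalize this kind of computation to multiple states. Component reliabilities are hypothetical.

```python
def consecutive_k_out_of_n_F(p, k):
    """Reliability of a binary linear consecutive-k-out-of-n:F system.

    p[i] is the reliability of component i; the system fails iff at least
    k consecutive components fail. State j = length of the run of failed
    components at the current end of the line (j < k, else the system is down).
    """
    state = [0.0] * k
    state[0] = 1.0
    for pi in p:
        new = [0.0] * k
        new[0] = sum(state) * pi                 # a working component resets the run
        for j in range(k - 1):
            new[j + 1] = state[j] * (1.0 - pi)   # the run of failures grows by one
        state = new                              # mass reaching run length k is lost (failure)
    return sum(state)

# Hypothetical example: 6 components, system fails on 2 consecutive failures.
print(consecutive_k_out_of_n_F([0.9, 0.95, 0.9, 0.85, 0.9, 0.95], k=2))
```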

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the dynamic crane deployment problem with the objective of determining the crane deployment frequency and routes over a planning horizon to minimize the total workload overflow, and a heuristic algorithm was then developed to solve problems of practical sizes.
Abstract: Rubber Tired Gantry Cranes (RTGCs) are the most widely used pieces of equipment in the Hong Kong sea-freight container yards. Workload distribution in the yard changes continuously over time. The dynamic deployment of RTGCs is an important issue in yard operation management. This paper investigates the dynamic crane deployment problem with the objective of determining the crane deployment frequency and routes over a planning horizon to minimize the total workload overflow. The problem is formulated as a mixed integer programming model. A heuristic algorithm is then developed to solve problems of practical sizes. The heuristic quickly finds a near optimal solution for crane deployment operation.

Journal ArticleDOI
TL;DR: In this article, a tabu-search-based algorithm was proposed to find a combination of a production plan and schedule that are feasible and that approximately optimize the objective function (involving the overproduction and underproduction of finished automobiles, the set-up cost, the idle times of workcells on the line, the makespan and the load deviations among work-cells).
Abstract: We address the closely related problems of production planning and scheduling on mixed model automobile assembly lines. We propose an integrated solution, in which a production plan that is feasible with respect to aggregate capacity constraints is developed and then a sequence that is feasible with respect to this plan is sought. We propose three tabu-search-based algorithms that explore the solution spaces for both problems to different degrees to find a combination of a production plan and schedule that are feasible and that approximately optimize the objective function (involving the overproduction and underproduction of finished automobiles, the set-up cost, the idle times of work-cells on the line, the makespan and the load deviations among work-cells). Simulation is used to evaluate alternative schedules. Stochastic extensions are proposed and the complexities of these algorithms are discussed. Example runs comparing the algorithms are presented for deterministic cases, stochastic cases, types of a...

Journal ArticleDOI
TL;DR: In this paper, the authors investigate how the degree of correlation affects the increase in the mean lifetime for parallel redundancy when the two components are positively quadrant dependent and derive various bounds.
Abstract: Parallel redundancy is a common approach to increase system reliability and mean time to failure. When studying systems with redundant components, it is usually assumed that the components are independent; however, this assumption is seldom valid in practice. In the case of dependent components, the effectiveness of adding a component may be quite different from the case of independent components. In this paper we investigate how the degree of correlation affects the increase in the mean lifetime for parallel redundancy when the two components are positively quadrant dependent. A number of bivariate distributions that can be used in the modeling of dependent components are compared. Various bounds are also derived. The results are useful in reliability analysis as well as for designers who are required to take into account the possible dependence among the components.
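A quick way to see the effect is to simulate two exponential component lifetimes made positively dependent through a Gaussian copula and compare the mean lifetime of the parallel pair across correlation levels. The marginal rates and correlations are hypothetical, and the Gaussian copula is only one convenient stand-in for the bivariate models compared in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
lam1, lam2, n = 0.1, 0.1, 200000      # exponential rates and sample size (hypothetical)

def parallel_mean_life(rho):
    """Mean lifetime of a 2-component parallel system, Gaussian-copula dependence."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = norm.cdf(z)                                   # correlated uniforms
    t1 = -np.log(1.0 - u[:, 0]) / lam1                # exponential marginals
    t2 = -np.log(1.0 - u[:, 1]) / lam2
    return np.maximum(t1, t2).mean()                  # parallel system lives until the last failure

single = 1.0 / lam1
for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho:.1f}: mean system life = {parallel_mean_life(rho):.2f} "
          f"(single component: {single:.1f})")
```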

Journal ArticleDOI
TL;DR: In this article, the authors study the problem of balancing two desirable but conflicting objectives in the newsvendor model and propose a more flexible satisficing objective where the target does not have to be prespecified.
Abstract: In this paper we study the problem of balancing two desirable but conflicting objectives in the newsvendor model. The standard objective in the newsvendor model is the expected profit maximization. Another objective (known as the "satisficing"--or, "aspiration-level"--objective) that has been studied in the literature is the probability of exceeding a prespecified and fixed target profit level. Since it may not always be obvious what the fixed target profit level should be, we introduce a more flexible satisficing objective where the target does not have to be prespecified. Our satisficing/aspiration-level objective is defined as the probability of exceeding the expected profit and it is a "moving" target that is a function of the order quantity. We provide a discussion of the properties of the newly introduced probability maximization objective. As a departure from previous work where the individual objectives were considered in isolation, in this paper we develop a model that unifies and integrates the ...
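The sketch below contrasts the two objectives for a standard newsvendor with normally distributed demand: expected profit and the probability of exceeding the expected profit at the same order quantity (the moving target described above), both estimated by simulation. Price, cost, and demand parameters are hypothetical.

```python
import random
import statistics as st

price, cost = 10.0, 6.0
mu, sigma = 100.0, 20.0            # normally distributed demand (hypothetical)

def profits(q, n=50000):
    out = []
    for _ in range(n):
        d = max(0.0, random.gauss(mu, sigma))
        out.append(price * min(q, d) - cost * q)    # unsold units are lost
    return out

random.seed(6)
for q in (90, 100, 110, 120):
    sample = profits(q)
    expected = st.fmean(sample)
    # Satisficing objective with a moving target: P(profit >= expected profit at q).
    prob = sum(x >= expected for x in sample) / len(sample)
    print(f"q = {q}: E[profit] = {expected:6.1f},  P(profit >= E[profit]) = {prob:.3f}")
```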

Journal ArticleDOI
TL;DR: In this article, the deterministic-demand buyer-vendor coordination problem is generalized to simultaneously consider cargo capacity constraints and general inbound/outbound transportation costs; the error bounds for the proposed heuristics are 6% and 25%, respectively.
Abstract: In this paper, we generalize the deterministic demand buyer-vendor coordination problem to simultaneously consider cargo capacity constraints and general inbound/outbound transportation costs. To this end, we first consider a replenishment cost structure that includes a fixed cost as well as a stepwise inbound freight cost for the vendor. We then extend our results to consider the case where both the vendor and the buyer are subject to this freight cost structure. Hence, in the second model, both the inbound and the outbound transportation costs/constraints of the buyer-vendor pair are modeled explicitly. For each case, we provide heuristic algorithms with promising error bounds. The error bounds for these heuristic methods are 6% and 25%, respectively. Using the costs of these heuristics as upper bounds, we also develop finite-time exact solution procedures.

Journal ArticleDOI
TL;DR: Based on the structure of the hedging point policy, a parameterized near-optimal production policy for a multiple-product manufacturing system is proposed in this article, where the analytical formalism is combined with simulation-based statistical tools, such as experimental design and response surface methodology, to provide an approximation of the optimal control policy.
Abstract: This paper deals with the issue of production control in a manufacturing system with multiple machines which are subject to breakdowns and repairs. The control variables considered are the production rates for different products on the machines. Our objective is to minimize the expected total discounted cost due to the finished good inventories and backlogs. Based on the structure of the hedging point policy, a parameterized near-optimal production policy for a multiple-product manufacturing system is proposed. The analytical formalism is combined with simulation-based statistical tools, such as experimental design and response surface methodology. The aim of such a combination is to provide an approximation of the optimal control policy. In the proposed approach, the parameterized near-optimal control policy is used as an input for the simulation model. For each entry consisting of a combination of parameters, the cost incurred is obtained. The significant effects of the control variables are de...
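To make the hedging-point structure concrete, the sketch below simulates a single machine and a single product in discrete time: while the machine is up it produces at the maximum rate below the hedging point z and at the demand rate once z is reached; breakdowns and repairs occur with fixed per-step probabilities, and the discounted inventory/backlog cost is estimated for a few values of z. All rates and costs are hypothetical, and the paper's policy is parameterized per product across multiple machines.

```python
import math
import random

# Hypothetical single-machine, single-product data.
d, u_max = 1.0, 2.0              # demand rate and maximum production rate
p_fail, p_repair = 0.02, 0.10    # per-step breakdown / repair probabilities
c_plus, c_minus = 1.0, 5.0       # holding and backlog cost rates
r, dt, T = 0.1, 0.1, 5000        # discount rate, step size, horizon (steps)

def discounted_cost(z, n_reps=200):
    """Simulated discounted inventory/backlog cost under hedging point z."""
    step_disc = math.exp(-r * dt)
    total = 0.0
    for _ in range(n_reps):
        x, up, disc, cost = 0.0, True, 1.0, 0.0
        for _ in range(T):
            if up:
                if x < z:
                    # Produce at the maximum rate, throttling to d once z is reached.
                    x = min(x + (u_max - d) * dt, z)
                # At x == z the machine produces exactly at the demand rate.
                up = random.random() > p_fail
            else:
                x -= d * dt                      # machine down: surplus shrinks, backlog grows
                up = random.random() < p_repair
            cost += disc * (c_plus * max(x, 0.0) + c_minus * max(-x, 0.0)) * dt
            disc *= step_disc
        total += cost
    return total / n_reps

random.seed(7)
for z in (0.0, 2.0, 4.0, 8.0):
    print(f"hedging point z = {z}: estimated discounted cost = {discounted_cost(z):.2f}")
```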

Journal ArticleDOI
TL;DR: A property of this problem, called multimodularity, is proved to ensure that a local search algorithm terminates with a globally optimal solution.
Abstract: We study a shift scheduling problem for call centers with an overall service level objective. We prove a property of this problem, called multimodularity, that ensures that a local search algorithm terminates with a globally optimal solution. We report on computations performed using real call center data.