
Showing papers in "IIE Transactions in 2010"


Journal ArticleDOI
TL;DR: In this article, a general reliability model is developed based on degradation and random shock modeling, which is then extended to a specific model for a linear degradation path and normally distributed shock load sizes and damage sizes.
Abstract: For complex systems that experience Multiple Dependent Competing Failure Processes (MDCFP), the dependency among the failure processes presents challenging issues in reliability modeling. This article develops reliability models and preventive maintenance policies for systems subject to MDCFP. Specifically, two dependent/correlated failure processes are considered: soft failures caused jointly by continuous smooth degradation and additional abrupt degradation damage due to a shock process, and catastrophic failures caused by an abrupt and sudden stress from the same shock process. A general reliability model is developed based on degradation and random shock modeling (i.e., extreme and cumulative shock models), which is then extended to a specific model for a linear degradation path and normally distributed shock load sizes and damage sizes. A preventive maintenance policy using periodic inspection is also developed by minimizing the average long-run maintenance cost rate. The developed reliability and ma...

314 citations
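As a rough illustration of the two dependent failure processes described above, the following Monte Carlo sketch evaluates reliability for a linear degradation path with normally distributed shock loads and damage sizes driven by one Poisson shock process. The parameter values are assumed for illustration; this is not the paper's closed-form model.

```python
import numpy as np

rng = np.random.default_rng(0)

def survives(t, phi=0.0, beta=1.0, H=100.0, D=50.0, lam=0.1,
             load_mu=30.0, load_sd=8.0, dmg_mu=2.0, dmg_sd=0.5):
    """One simulated history: True if the system is still working at time t."""
    n_shocks = rng.poisson(lam * t)                    # shocks arriving in (0, t]
    loads = rng.normal(load_mu, load_sd, n_shocks)     # shock load sizes (hard-failure trigger)
    damages = rng.normal(dmg_mu, dmg_sd, n_shocks)     # abrupt degradation increments
    hard_ok = np.all(loads < D)                        # extreme-shock (catastrophic) criterion
    soft_ok = phi + beta * t + damages.sum() < H       # linear wear + cumulative damage criterion
    return bool(hard_ok and soft_ok)

def reliability(t, n_rep=20_000):
    return np.mean([survives(t) for _ in range(n_rep)])

print(reliability(30.0))
```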


Journal ArticleDOI
TL;DR: In this article, the authors considered the value of portfolio procurement in a supply chain, where a buyer can either procure parts for future demand from sellers using fixed price contracts or, option contracts or tap into the market for spot purchases.
Abstract: This article considers the value of portfolio procurement in a supply chain, where a buyer can either procure parts for future demand from sellers using fixed-price contracts or option contracts, or tap into the market for spot purchases. A single-period portfolio procurement problem when both the product demand and the spot price are random (and possibly correlated) is examined and the optimal portfolio procurement strategy for the buyer is constructed. A shortest-monotone path algorithm is provided for the general problem to obtain the optimal procurement solution and the resulting expected minimum procurement cost. In the event that demand and spot price are independent, the solution algorithm simplifies considerably. More interestingly, the optimal procurement cost function in this case has an intuitive geometrical interpretation that facilitates managerial insights. The portfolio effect, i.e., the benefit of portfolio contract procurement over a single contract procurement is also studied. Finally, a...

147 citations
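The portfolio idea can be pictured with a brute-force sketch: commit some units on a fixed-price contract, reserve option capacity, and cover any residual demand on the spot market, then search a grid for the cheapest mix under correlated demand and spot-price samples. The prices, distributions, and grid below are assumptions, and the paper's shortest-monotone-path algorithm is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
# Correlated (demand, spot price) samples via a bivariate normal, truncated at zero.
mean, cov = [100.0, 12.0], [[400.0, 30.0], [30.0, 9.0]]
demand, spot = rng.multivariate_normal(mean, cov, n).T
demand, spot = np.maximum(demand, 0.0), np.maximum(spot, 0.0)

c_fixed, c_reserve, c_exercise = 10.0, 2.0, 9.0   # unit prices (assumed)

def expected_cost(q_f, q_o):
    shortfall = np.maximum(demand - q_f, 0.0)      # demand not met by the fixed contract
    exercised = np.minimum(shortfall, q_o)         # options exercised first
    spot_buy = shortfall - exercised               # remainder bought at the spot price
    cost = c_fixed * q_f + c_reserve * q_o + c_exercise * exercised + spot * spot_buy
    return cost.mean()

grid = range(0, 201, 10)
best = min((expected_cost(qf, qo), qf, qo) for qf in grid for qo in grid)
print(f"best expected cost {best[0]:.1f} with q_fixed={best[1]}, q_option={best[2]}")
```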


Journal ArticleDOI
TL;DR: In this article, the authors show how delay propagation can be reduced by redistributing existing slack in the planning process, making minor modifications to the flight schedule while leaving the original fleeting and crew scheduling decisions unchanged.
Abstract: Passenger airline delays have received increasing attention over the past several years as air space congestion, severe weather, mechanical problems, and other sources cause substantial disruptions to a planned flight schedule. Adding to this challenge is the fact that each flight delay can propagate to disrupt subsequent downstream flights that await the delayed flight's aircraft and crew. This potential for delays to propagate is exacerbated by a fundamental conflict: slack in the planned schedule is often viewed as undesirable, as it implies missed opportunities to utilize costly perishable resources, whereas slack is critical in operations as a means for absorbing disruption. This article shows how delay propagation can be reduced by redistributing existing slack in the planning process, making minor modifications to the flight schedule while leaving the original fleeting and crew scheduling decisions unchanged. Computational results based on data from a major U.S. carrier are presented that show that...

136 citations


Journal ArticleDOI
TL;DR: This article considers a multi-objective Ranking and Selection (R&S) problem, where the system designs are evaluated in terms of more than one performance measure, and the concept of Pareto optimality is incorporated into the R&S scheme.
Abstract: This article considers a multi-objective Ranking and Selection (R&S) problem, where the system designs are evaluated in terms of more than one performance measure. The concept of Pareto optimality is incorporated into the R&S scheme, and attempts are made to find all of the non-dominated designs rather than a single “best” one. In addition to a performance index to measure how non-dominated a design is, two types of errors are defined to measure the probabilities that designs in the true Pareto/non-Pareto sets are dominated/non-dominated based on observed performance. Asymptotic allocation rules are derived for simulation replications based on a Lagrangian relaxation method, under the assumption that an arbitrarily large simulation budget is available. Finally, a simple sequential procedure is proposed to allocate the simulation replications based on the asymptotic allocation rules. Computational results show that the proposed solution framework is efficient when compared to several other algorithms in te...

133 citations
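The Pareto-screening step described above can be illustrated with a small dominance check on sample means (minimization in every objective). The sketch below covers only identification of the observed non-dominated set, not the performance index, error measures, or allocation rules; the data are synthetic.

```python
import numpy as np

def nondominated(means):
    """means: (n_designs, n_objectives) array of sample means; returns indices of the
    designs not dominated by any other design (smaller is better in every objective)."""
    n = means.shape[0]
    keep = []
    for i in range(n):
        dominated = any(
            np.all(means[j] <= means[i]) and np.any(means[j] < means[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(2)
sample_means = rng.normal(size=(8, 2))   # 8 designs, 2 objectives (illustrative data)
print(nondominated(sample_means))
```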


Journal ArticleDOI
TL;DR: In this article, the authors considered a multi-product closed-loop logistics network design problem with hybrid manufacturing/remanufacturing facilities and finite-capacity hybrid distribution/collection centers to serve a set of retail locations.
Abstract: This article considers a multi-product closed-loop logistics network design problem with hybrid manufacturing/remanufacturing facilities and finite-capacity hybrid distribution/collection centers to serve a set of retail locations. First, a mixed integer linear program is presented that determines the optimal solution that characterizes facility locations, along with the integrated forward and reverse flows such that the total cost of facility location, processing, and transportation associated with forward and reverse flows in the network is minimized. Second, a solution method based on Benders' decomposition with strengthened Benders' cuts for improved computational efficiencies is devised. In addition to this method, an alternative formulation is presented and a new dual solution method for the associated Benders' decomposition to obtain a different set of strengthened Benders' cuts is developed. In the Benders' decomposition framework, the strengthened cuts obtained from original and alternative formu...

124 citations


Journal ArticleDOI
TL;DR: A myopic scheduling algorithm with an optimal stopping criterion for this problem assuming exponential service times already exists in the literature; this work relaxes that assumption and develops numerical techniques for general service time distributions.
Abstract: A sequential clinical scheduling method for patients with general service time distributions is developed in this paper. Patients call a medical clinic to request an appointment with their physician. During the call, the scheduler assigns the patient to an available slot in the physician's schedule. This is communicated to the patient before the call terminates and, thus, the schedule is constructed sequentially. In practice, there is very limited opportunity to adjust the schedule once the complete set of patients is known. Scheduled patients might not attend, that is, they might “no-show,” and the service times of those attending are random. A myopic scheduling algorithm with an optimal stopping criterion for this problem assuming exponential service times already exists in the literature. This work relaxes this assumption and develops numerical techniques for general service time distributions. A special case in which service times are gamma distributed is considered and it is shown that computation is ...

115 citations
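A simple way to see the ingredients of this problem is to simulate one fixed appointment schedule under no-shows and gamma service times and estimate expected patient waiting and clinic overtime. The sketch below does only that evaluation; the myopic sequential assignment rule itself is not reproduced, and the slot times, show probability, and gamma parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)

def evaluate(slot_times, session_len=240.0, show_prob=0.85,
             shape=2.0, scale=10.0, n_rep=10_000):
    waits, overtimes = [], []
    for _ in range(n_rep):
        clock, total_wait = 0.0, 0.0
        for t in slot_times:
            if rng.random() > show_prob:          # patient no-shows
                continue
            start = max(clock, t)                 # wait if the physician is still busy
            total_wait += start - t
            clock = start + rng.gamma(shape, scale)
        waits.append(total_wait)
        overtimes.append(max(clock - session_len, 0.0))
    return np.mean(waits), np.mean(overtimes)

schedule = [0, 20, 40, 60, 80, 100, 120, 140, 160, 180]   # slot start times in minutes
print(evaluate(schedule))
```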


Journal ArticleDOI
TL;DR: In this paper, the authors presented a hybrid degradation-based reliability model for a single-unit system whose degradation is driven by a semi-Markov environment; the model combines environmental data and stochastic failure models to assess the current or future health of the system.
Abstract: This article presents hybrid, degradation-based reliability models for a single-unit system whose degradation is driven by a semi-Markov environment. The primary objective is to develop a mathematical framework and associated computational techniques that unite environmental data and stochastic failure models to assess the current or future health of the system. By employing phase-type distributions, it is possible to construct a surrogate environment process that is amenable to analysis by exact Markovian techniques to obtain reliability estimates. The viability of the proposed approach and the quality of the approximations are demonstrated in two numerical experiments. The numerical results indicate that remarkably accurate lifetime distribution and moment approximations are attainable.

107 citations


Journal ArticleDOI
TL;DR: In this article, the authors present analytical methods to analyze split and merge production systems with exponential machine reliability models, operating under circulate, strictly circulate, priority, and percentage split/merge policies.
Abstract: Production split and merge operations are widely used in many manufacturing systems to increase production capacity and variety, improve product quality, and carry out scheduling and control activities. This article presents analytical methods to analyze split and merge production systems with exponential machine reliability models, operating under circulate, strictly circulate, priority, and percentage split/merge policies. In addition to developing the recursive procedures for performance analysis, the structural properties of the systems and the impacts of routing policies on system performance are investigated.

95 citations


Journal ArticleDOI
TL;DR: In this article, a methodology for designing a network of preventive healthcare facilities so as to improve its accessibility to potential clients and thus maximize participation in preventive healthcare programs is presented, where the problem is formulated as a mathematical program with equilibrium constraints.
Abstract: Preventive healthcare aims at reducing the likelihood and severity of potentially life-threatening illnesses by protection and early detection. The level of participation in preventive healthcare programs is a critical determinant in terms of their effectiveness and efficiency. This article presents a methodology for designing a network of preventive healthcare facilities so as to improve its accessibility to potential clients and thus maximize participation in preventive healthcare programs. The problem is formulated as a mathematical program with equilibrium constraints; i.e., a bilevel non-linear optimization model. The lower level problem which determines the allocation of clients to facilities is formulated as a variational inequality; the upper level is a facility location and capacity allocation problem. The developed solution approach is based on the location–allocation framework. The variational inequality is formulated as a convex optimization problem, which can be solved by the gradient project...

88 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a synthetic double sampling chart that integrates the double sampling (DS) chart and the conforming run length chart, which offers performance improvements in terms of the zero-state Average Run Length (ARL) and Average Number of Observations to Sample (ANOS).
Abstract: This article proposes a synthetic double sampling chart that integrates the Double Sampling (DS) chart and the conforming run length chart. The proposed procedure offers performance improvements in terms of the zero-state Average Run Length (ARL) and Average Number of Observations to Sample (ANOS). When the size of a mean shift δ (given in terms of the number of standard deviation units) is small (i.e., between 0.4 and 0.6) and the mean sample size n = 5, the proposed procedure reduces the out-of-control ARL and ANOS values by nearly half, compared with both the synthetic and DS charts. In terms of detection ability versus the Exponentially Weighted Moving Average (EWMA) chart, the synthetic DS chart is superior to the synthetic or even the DS chart, as the former outperforms the EWMA chart for a larger range of δ values compared to the latter. The proposed procedure generally outperforms the EWMA chart in the detection of a mean shift when δ is larger than 0.5 and n = 5 or 10. Although the proposed procedu...

85 citations
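For readers unfamiliar with the ARL metric used above, the sketch below estimates zero-state ARL by simulation for a plain X-bar chart with sample size n = 5; the synthetic double sampling chart adds a second-stage sample and a conforming-run-length rule on top of this basic idea. The control limit and shift sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def arl_xbar(shift=0.5, n=5, L=3.0, n_rep=2000):
    """Average number of samples until the sample mean leaves +/- L*sigma/sqrt(n)
    when the process mean has shifted by `shift` standard deviations."""
    limit = L / np.sqrt(n)
    run_lengths = []
    for _ in range(n_rep):
        t = 0
        while True:
            t += 1
            xbar = rng.normal(shift, 1.0 / np.sqrt(n))
            if abs(xbar) > limit:
                run_lengths.append(t)
                break
    return np.mean(run_lengths)

print(arl_xbar(shift=0.0))   # in-control ARL (close to 370 for 3-sigma limits)
print(arl_xbar(shift=0.5))   # out-of-control ARL for a 0.5-sigma shift
```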


Journal ArticleDOI
TL;DR: In this paper, a module-based service model is proposed to facilitate customized service design and represent the relationships between functions and processes in a service, and a module selection problem for service selection is formulated using game theory.
Abstract: Service science research seeks to improve the productivity and quality of service offerings by creating new innovations, facilitating business management, and applying practical applications. Recent trends seek to apply and extend principles from product family design and mass customization into new service development. Product family design is a cost-effective way to achieve mass customization by allowing highly differentiated products to be developed from a common platform while targeting individual products to distinct market segments. This article extends concepts from module-based product families to create a method for service design. The objective in this research is to develop a method for designing customized families of services using game theory to model situations involving dynamic market environments. A module-based service model is proposed to facilitate customized service design and represent the relationships between functions and processes in a service. A module selection problem for plat...

Journal ArticleDOI
TL;DR: In this article, an analytical model for computing the expected long-run average cost of a consolidation system implementing a TQ-based policy is developed, and the cost expression is used to analyze the optimal TQ-based policy parameters.
Abstract: The logistics literature reports that three different types of shipment consolidation policies are popular in current practice. These are time-based, quantity-based and Time-and-Quantity (TQ)-based consolidation policies. Although time-based and quantity-based policies have been studied via analytical modeling, to the best of the authors' knowledge, there is no exact analytical model for computing the optimal TQ-based policy parameters. Considering the case of stochastic demand/order arrivals, an analytical model for computing the expected long-run average cost of a consolidation system implementing a TQ-based policy is developed. The cost expression is used to analyze the optimal TQ-based policy parameters. The presented analytical results prove that: (i) the optimal TQ-based policy outperforms the optimal time-based policy; and (ii) the optimal quantity-based policy is superior to the other two (i.e., optimal time-based and TQ-based) policies in terms of cost. Considering the expected maximum waiting tim...
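The TQ policy can be pictured with a small discrete-event simulation: orders arrive as a Poisson process and the accumulated load is dispatched when either q orders have accumulated or T time units have elapsed since the first waiting order. The simulation below estimates the long-run average cost that the paper derives exactly; the arrival rate and cost parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)

def tq_policy_cost(q=10, T=5.0, lam=2.0, K_dispatch=50.0, h=1.0, horizon=100_000.0):
    t, total_cost = 0.0, 0.0
    arrivals = []                                      # arrival times of waiting orders
    while t < horizon:
        t += rng.exponential(1.0 / lam)                # next order arrival
        if arrivals and t - arrivals[0] >= T:          # clock ran out before this arrival:
            dispatch = arrivals[0] + T                 # the waiting batch left at the deadline
            total_cost += K_dispatch + h * sum(dispatch - a for a in arrivals)
            arrivals = []
        arrivals.append(t)
        if len(arrivals) >= q:                         # quantity trigger fires immediately
            total_cost += K_dispatch + h * sum(t - a for a in arrivals)
            arrivals = []
    return total_cost / horizon

print(tq_policy_cost())
```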

Journal ArticleDOI
TL;DR: In this article, a new methodology is described for assessing warehouse technical efficiency based on empirical data, integrating several statistical approaches, together with new results derived from applying the method to a large sample of warehouses.
Abstract: Warehouses are a substantial component of logistic operations and an important contributor to speed and cost in supply chains. While there are widely accepted benchmarks for individual warehouse functions such as order picking, little is known about the overall technical efficiency of warehouses. Lacking a general understanding of warehouse technical efficiency and the associated causal factors limits industry's ability to identify the best opportunities for improving warehouse performance. The problem is compounded by the significant gap in the education and training of the industry's professionals. This article addresses this gap by describing both a new methodology for assessing warehouse technical efficiency based on empirical data integrating several statistical approaches and the new results derived from applying the method to a large sample of warehouses. The self-reported nature of attributes and performance data makes the use of statistical methods for rectifying data, validating models, and iden...

Journal ArticleDOI
TL;DR: Transitional Poisson regression models were used to obtain one-day-ahead arrival forecasts for respiratory, influenza, diarrhea, and abdominal pain syndrome groupings, and Cumulative Sum (CUSUM) and Exponentially Weighted Moving Average (EWMA) plans were adapted for monitoring the resulting non-homogeneous counts.
Abstract: Daily counts of computer records of hospital emergency department arrivals grouped according to diagnosis (called here syndrome groupings) can be monitored by epidemiologists for changes in frequency that could provide early warning of bioterrorism events or naturally occurring disease outbreaks and epidemics. This type of public health surveillance is sometimes called syndromic surveillance. We used transitional Poisson regression models to obtain one-day-ahead arrival forecasts. Regression parameter estimates and forecasts were updated for each day using the latest 365 days of data. The resulting time series of recursive estimates of parameters such as the amplitude and location of the seasonal peaks as well as the one-day-ahead forecasts and forecast errors can be monitored to understand changes in epidemiology of each syndrome grouping. The counts for each syndrome grouping were autocorrelated and non-homogeneous Poisson. As such, the main methodological contribution of the article is the adaptation o...
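One way to picture the monitoring step is a Poisson CUSUM whose reference value is recomputed each day from the one-day-ahead forecast, so that non-homogeneous in-control means are handled. The forecasts, inflation factor, and threshold below are illustrative assumptions, not the paper's fitted surveillance plan.

```python
import numpy as np

def poisson_cusum(counts, mu0, rho=1.5, h=5.0):
    """Return the CUSUM path and the first day it crosses threshold h.
    mu0[t] is the in-control (forecast) mean for day t; rho inflates it out of control."""
    counts, mu0 = np.asarray(counts, float), np.asarray(mu0, float)
    mu1 = rho * mu0
    k = (mu1 - mu0) / (np.log(mu1) - np.log(mu0))   # daily Poisson-CUSUM reference value
    s, path, signal = 0.0, [], None
    for t in range(len(counts)):
        s = max(0.0, s + counts[t] - k[t])
        path.append(s)
        if signal is None and s > h:
            signal = t
    return np.array(path), signal

rng = np.random.default_rng(6)
mu0 = 20 + 5 * np.sin(2 * np.pi * np.arange(60) / 30)                 # seasonal forecasts (assumed)
counts = rng.poisson(mu0 * np.r_[np.ones(40), 1.6 * np.ones(20)])     # simulated outbreak after day 40
print(poisson_cusum(counts, mu0)[1])
```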

Journal ArticleDOI
TL;DR: In this article, a Branch-and-Price (B&P) algorithm is developed that uses two different branching strategies and generates new rosters as needed to find high-quality rosters, using data provided by an anesthesia department of an 1100-bed hospital as well as an extensive set of randomly generated test instances for 15 and...
Abstract: A methodology is presented to solve the flexible shift scheduling problem of physicians when hospital administrators can exploit flexible start times, variable shift lengths, and overtime to cover demand. The objective is to minimize the total assignment cost subject to individual contracts and prevailing labor regulations. A wide range of legal restrictions, facility-specific staffing policies, individual preferences, and on-call requirements throughout the week are considered. The resulting model constructs shifts implicitly rather than starting with a predefined set of several shift types. To find high-quality rosters, a Branch-and-Price (B&P) algorithm is developed that uses two different branching strategies and generates new rosters as needed. The first strategy centers on the master problem variables and the second is based on the subproblem variables. Using data provided by an anesthesia department of an 1100-bed hospital as well as an extensive set of randomly generated test instances for 15 and ...

Journal ArticleDOI
TL;DR: Detailed system architecture and methodologies are first proposed, where the components include a real-time DDDAS simulation, grid modules, a web service communication server, databases, various sensors and a real system.
Abstract: Dynamic-Data-Driven Application Systems (DDDAS) is a new modeling and control paradigm which adaptively adjusts the fidelity of a simulation model. The fidelity of the simulation model is adjusted against available computational resources by incorporating dynamic data into the executing model, which then steers the measurement process for selective data update. To this end, comprehensive system architecture and methodologies are first proposed, where the components include a real-time DDDAS simulation, grid modules, a web service communication server, databases, various sensors and a real system. Abnormality detection, fidelity selection, fidelity assignment, and prediction and task generation are enabled through the embedded algorithms developed in this work. Grid computing is used for computational resources management and web services are used for inter-operable communications among distributed software components. The proposed DDDAS is demonstrated on an example of preventive maintenance scheduling in...

Journal ArticleDOI
TL;DR: In this paper, the authors integrate available data and physical knowledge through a Bayesian hierarchical framework with consideration of scale effects to model the nanowire growth process at any scale of interest.
Abstract: Despite significant advances in nanoscience, current physical models are unable to predict nanomanufacturing processes under uncertainties. This research work aims to model the nanowire (NW) growth process at any scale of interest. The main idea is to integrate available data and physical knowledge through a Bayesian hierarchical framework with consideration of scale effects. At each scale the NW growth model describes the time–space evolution of NWs at different sites on a substrate. The model consists of two major components: NW morphology and local variability. The morphology component represents the overall trend characterized by growth kinetics. The area-specific variability is less understood in nanophysics due to complex interactions among neighboring NWs. The local variability is therefore modeled by an intrinsic Gaussian Markov random field to separate it from the growth kinetics in the morphology component. Case studies are provided to illustrate the NW growth process model at coarse and fine sc...

Journal ArticleDOI
Chanseok Park
TL;DR: In this paper, a closed-form Maximum Likelihood Estimator (MLE) and best unbiased estimator (BUE) are provided under a general load-sharing rule when the underlying lifetime distribution of the components in the system is exponential.
Abstract: Consider a multi-component system connected in parallel. In this system, as components fail one by one, the total load or traffic applied to the system is redistributed among the remaining surviving components, which is commonly referred to as load-sharing. This article develops parameter estimation methods for these types of systems. A closed-form Maximum Likelihood Estimator (MLE) and Best Unbiased Estimator (BUE) are provided under a general load-sharing rule when the underlying lifetime distribution of the components in the system is exponential. As an extension, it is assumed that the underlying lifetime distribution of the components is Weibull and it is shown that, after the shape parameter is estimated by solving the one-dimensional log-likelihood estimating equation, the closed-form MLE and conditional BUE of the rate parameter are easily obtained. The asymptotic distribution of the proposed MLE is also provided. Illustrative examples and Monte Carlo simulation results are also presented and these substan...
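For the exponential case, the flavor of the closed-form MLE can be sketched as follows: while i components survive, each works at rate gamma[i] * lam, so each sojourn contributes i * gamma[i] * (sojourn length) of exposure, and lam_hat = (total failures) / (total exposure). The load-share multipliers are taken as known, the Weibull extension and the BUE are not reproduced, and the simulated data are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_system(n, lam, gamma):
    """Return the sojourn times with i = n, n-1, ..., 1 survivors for one system."""
    return [rng.exponential(1.0 / (i * gamma[i] * lam)) for i in range(n, 0, -1)]

def mle_rate(sojourns_per_system, gamma):
    failures = sum(len(s) for s in sojourns_per_system)
    exposure = sum(i * gamma[i] * d
                   for s in sojourns_per_system
                   for i, d in zip(range(len(s), 0, -1), s))
    return failures / exposure

n, lam_true = 4, 0.2
gamma = {4: 1.0, 3: 4 / 3, 2: 2.0, 1: 4.0}        # equal total-load sharing (assumed rule)
data = [simulate_system(n, lam_true, gamma) for _ in range(200)]
print(mle_rate(data, gamma))                       # should be close to 0.2
```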

Journal ArticleDOI
TL;DR: In this article, the authors considered a first-come first-served single-server system with opening and closing times, where the service durations are exponentially distributed, and the total number of arrivals is a Poisson random variable.
Abstract: This article considers a first-come first-served single-server system with opening and closing times. Service durations are exponentially distributed, and the total number of arrivals is a Poisson random variable. Naturally each customer wishes to minimize his/her waiting time. The process of choosing an arrival time is presented as a (non-cooperative) multi-player game. The overall goal of this work is to find a Nash equilibrium game strategy. It is assumed in the literature that arrivals before the opening time of the system are allowed. In this work the case where early arrivals are forbidden is studied. It turns out that unless the system is very heavily loaded, the equilibrium solution with such a restriction does not reduce the expected waiting time in a significant way. The equilibrium solution is compared with the solution which maximizes social welfare. Finally, it is shown that social welfare can be increased in equilibrium by restricting arrivals to certain points of time.

Journal ArticleDOI
TL;DR: In this article, the authors developed analytical models to estimate worker blocking in narrow-aisle order picking systems and showed that the results obtained using deterministic pick times are in fact sensitive to this assumption.
Abstract: The focus of this work is to develop analytical models to estimate worker blocking in narrow-aisle order picking systems. The situation with deterministic pick times has already been reported in the literature but the case with non-deterministic pick times remains an open question. Models that use a non-deterministic pick time for cases where the pick:walk-time ratios are 1:1 and ∞:1 are presented. It is shown that the results obtained using deterministic pick times are in fact sensitive to this assumption.

Journal ArticleDOI
TL;DR: In this paper, the benefits beyond information sharing are analyzed and the motivation for the manufacturer (vendor) to join such a program is assessed. An analysis is also presented of how the system parameters affect profitability, and the conditions that make the vendor-managed system a viable strategy for the manufacturer are determined.
Abstract: Vendor-Managed Inventory (VMI) has attracted a lot of attention due to its benefits such as fewer stock-outs, higher sales, and lower inventory levels at the retailers. Vendor-Managed Availability (VMA) is an improvement that exploits the advantages beyond VMI. This article analyzes the benefits beyond information sharing and assesses the motivation for the manufacturer (vendor) behind joining such a program. It is shown that such vendor-managed systems provide increased flexibility in the manufacturer's operations and may bring additional benefits. An analysis is presented of how the system parameters affect profitability, and the conditions that make the vendor-managed system a viable strategy for the manufacturer are determined.

Journal ArticleDOI
TL;DR: In this paper, an effective procedure for handling two-class classification problems with highly imbalanced class sizes was developed, where each classifier is constructed from a resized training set with reduced dimension space.
Abstract: This article develops an effective procedure for handling two-class classification problems with highly imbalanced class sizes. In many imbalanced two-class problems, the majority class represents “normal” cases, while the minority class represents “abnormal” cases, detection of which is critical to decision making. When the class sizes are highly imbalanced, conventional classification methods tend to strongly favor the majority class, resulting in very low or even no detection of the minority class. The research objective of this article is to devise a systematic procedure to substantially improve the power of detecting the minority class so that the resulting procedure can help screen the original data set and select a much smaller subset for further investigation. A procedure is developed that is based on ensemble classifiers, where each classifier is constructed from a resized training set with reduced dimension space. In addition, how to find the best values of the decision variables in the proposed...
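The general idea of an ensemble built from resized training sets with reduced dimension spaces can be sketched as follows: each member sees all minority-class cases, a random undersample of the majority class, and a random subset of the features, and the members vote. This is a generic illustration using scikit-learn with synthetic data, not the paper's exact procedure or its tuning of the decision variables.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(8)
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97, 0.03],
                           random_state=0)           # class 1 is the rare "abnormal" class

def fit_ensemble(X, y, n_members=25, n_features_sub=8):
    minority, majority = np.where(y == 1)[0], np.where(y == 0)[0]
    members = []
    for _ in range(n_members):
        maj_sample = rng.choice(majority, size=len(minority), replace=False)
        rows = np.concatenate([minority, maj_sample])          # resized, balanced training set
        cols = rng.choice(X.shape[1], size=n_features_sub, replace=False)  # reduced feature space
        clf = DecisionTreeClassifier(max_depth=5).fit(X[np.ix_(rows, cols)], y[rows])
        members.append((clf, cols))
    return members

def predict(members, X):
    votes = np.mean([clf.predict(X[:, cols]) for clf, cols in members], axis=0)
    return (votes >= 0.5).astype(int)

members = fit_ensemble(X, y)
pred = predict(members, X)
print("minority detection rate:", np.mean(pred[y == 1] == 1))
```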

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the value of perfect monitoring information for optimal replacement of deteriorating systems in the Proportional Hazards Model (PHM) and proposed a new recursive procedure to obtain the parameters of the optimal replacement policy and the optimal average cost.
Abstract: This article investigates the value of perfect monitoring information for optimal replacement of deteriorating systems in the Proportional Hazards Model (PHM). A continuous-time Markov chain describes the condition of the system. Although the form of an optimal replacement policy for a system under periodic monitoring in the PHM was developed previously, an approximation of the Markov process as constant within inspection intervals led to a counterintuitive result that less frequent monitoring could yield a replacement policy with lower average cost. This article explicitly accounts for possible state transitions between inspection epochs to remove the approximation and eliminate the cost anomaly. However, the mathematical evaluation becomes significantly more complicated. To overcome this difficulty, a new recursive procedure to obtain the parameters of the optimal replacement policy and the optimal average cost is presented. A numerical example is provided to illustrate the computational procedure and the ...

Journal ArticleDOI
TL;DR: In this paper, a new method for determining the membership functions of parameter estimates and the reliability functions of multi-parameter lifetime distributions is proposed, and a preventive maintenance policy is formulated using a fuzzy reliability framework.
Abstract: Reliability assessment is an important issue in reliability engineering. Classical reliability-estimating methods are based on precise (also called “crisp”) lifetime data. It is usually assumed that the observed lifetime data take precise real numbers. Due to the lack, inaccuracy, and fluctuation of data, some collected lifetime data may be in the form of fuzzy values. Therefore, it is necessary to characterize estimation methods along a continuum that ranges from crisp to fuzzy. Bayesian methods have proved to be very useful for small data samples. There is limited literature on Bayesian reliability estimation based on fuzzy reliability data. Most reported studies in this area deal with single-parameter lifetime distributions. This article, however, proposes a new method for determining the membership functions of parameter estimates and the reliability functions of multi-parameter lifetime distributions. Also, a preventive maintenance policy is formulated using a fuzzy reliability framework. An artifici...

Journal ArticleDOI
TL;DR: In this article, the parameter estimation method for a step-stress accelerated life testing (SSALT) model is discussed by utilizing techniques of the Generalized Linear Model (GLM), and the iteratively weighted least squares method is implemented to obtain the maximum likelihood estimation solution.
Abstract: In this article the parameter estimation method for a Step-Stress Accelerated Life Testing (SSALT) model is discussed by utilizing techniques of the Generalized Linear Model (GLM). A multiple progressive SSALT with exponential failure data and right censoring is analyzed. The likelihood function of the SSALT is treated as being a censoring variate with Poisson distribution and the life-stress relationship is defined by a log link function of a GLM. Both the maximum likelihood estimation and the Bayesian estimation of GLM parameters are discussed. The iteratively weighted least squares method is implemented to obtain the maximum likelihood estimation solution. The Bayesian estimation is derived by applying Jeffreys' non-informative prior and the Markov chain Monte Carlo method. Finally, a real industrial example is presented to demonstrate these estimation methods.
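The iteratively reweighted least squares step for a Poisson GLM with log link, the estimation engine mentioned above, can be sketched in a few lines. The design matrix and data below are synthetic illustrations rather than an SSALT data set.

```python
import numpy as np

def irls_poisson(X, y, n_iter=25, tol=1e-8):
    """Fit log(mu) = X @ beta by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        W = mu                                   # Poisson working weights
        z = eta + (y - mu) / mu                  # working response
        beta_new = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(9)
x = rng.uniform(0, 2, 200)
X = np.column_stack([np.ones_like(x), x])
y = rng.poisson(np.exp(0.5 + 1.2 * x))           # true coefficients (0.5, 1.2)
print(irls_poisson(X, y))
```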

Journal ArticleDOI
TL;DR: A new method for optimal sensor allocation in a DSN with the objective of timely detection of the abnormalities in an underlying physical system is proposed, which satisfies both the cost and detectability requirements.
Abstract: Massive amounts of data are generated in Distributed Sensor Networks (DSNs), posing challenges to effective and efficient detection of system abnormality through data analysis. This article proposes a new method for optimal sensor allocation in a DSN with the objective of timely detection of the abnormalities in an underlying physical system. This method involves two steps: first, a Bayesian Network (BN) is built to represent the causal relationships among the physical variables in the system; second, an integrated algorithm combining the BN and a set-covering algorithm is developed to determine which physical variables should be sensed, in order to minimize the total sensing cost as well as satisfy a prescribed detectability requirement. Case studies are performed on a hot forming process and a large-scale cap alignment process, showing that the developed algorithm satisfies both the cost and detectability requirements.
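The set-covering step can be illustrated with a greedy heuristic that picks sensors by coverage-per-cost until every fault meets a required coverage count. The coverage sets, costs, and requirement below are invented for illustration; the paper couples this step with a Bayesian network and a prescribed detectability level rather than simple counts.

```python
def greedy_sensor_cover(coverage, cost, faults, required=1):
    """coverage: {sensor: set of faults it detects}; returns a list of chosen sensors."""
    need = {f: required for f in faults}            # remaining coverage needed per fault
    chosen = []
    while any(v > 0 for v in need.values()):
        def gain(s):
            return sum(1 for f in coverage[s] if need.get(f, 0) > 0)
        best = max((s for s in coverage if s not in chosen and gain(s) > 0),
                   key=lambda s: gain(s) / cost[s], default=None)
        if best is None:
            raise ValueError("remaining faults cannot be covered")
        chosen.append(best)
        for f in coverage[best]:
            if need.get(f, 0) > 0:
                need[f] -= 1
    return chosen

coverage = {"s1": {"f1", "f2"}, "s2": {"f2", "f3"}, "s3": {"f1", "f3", "f4"}}
cost = {"s1": 1.0, "s2": 1.5, "s3": 2.5}
print(greedy_sensor_cover(coverage, cost, faults=["f1", "f2", "f3", "f4"]))
```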

Journal ArticleDOI
TL;DR: In this paper, a chart allocation strategy for serial parallel-multistage manufacturing processes is proposed to make rational chart allocation decisions to achieve a quicker detection capability over the whole potential fault set.
Abstract: The application of Statistical Process Control (SPC) to multistage manufacturing process has received considerable attention recently. How to effectively allocate conventional SPC charts in a serial multistage environment to monitor the process quality has not been thoroughly studied. This article adopts the approach of a linear state space model to describe multistage processes and proposes a strategy to properly allocate control charts in serial parallel-multistage manufacturing processes by considering the interrelationship information between stages. Based on the proposed chart allocation strategy it proves possible to make rational chart allocation decisions to achieve a quicker detection capability over the whole potential fault set. A hood assembly example is used to demonstrate the applications of the chart allocation strategy. Extensions are also discussed.

Journal ArticleDOI
TL;DR: Skart as mentioned in this paper is an automated sequential batch-means procedure for constructing a skewness and autoregression-adjusted confidence interval (CI) for the steady-state mean of a simulation output process either in discrete time (i.e., using observation-based statistics), or in continuous time (e.g., using time-persistent statistics).
Abstract: Skart is an automated sequential batch-means procedure for constructing a skewness- and autoregression-adjusted confidence interval (CI) for the steady-state mean of a simulation output process either in discrete time (i.e., using observation-based statistics), or in continuous time (i.e., using time-persistent statistics). Skart delivers a CI designed to satisfy user-specified requirements concerning both the CI's coverage probability and its absolute or relative precision. Skart exploits separate adjustments to the classical batch-means CI to account for the effects on the distribution of the underlying Student's t-statistic arising from skewness and autocorrelation of the batch means. The skewness adjustment is based on a Cornish–Fisher expansion for the classical batch-means t-statistic, and the autocorrelation adjustment is based on a first-order autoregressive approximation to the batch-means autocorrelation function. Skart also delivers a point estimator for the steady-state mean that is approximat...
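As background for Skart's adjustments, the classical non-overlapping batch-means confidence interval it starts from can be sketched as follows: split the output series into k batches, treat the batch means as approximately i.i.d. normal, and form a Student-t interval. Skart's Cornish–Fisher skewness adjustment, AR(1) autocorrelation adjustment, and sequential batch-size control are not reproduced; the AR(1) test stream below is an assumed example (NumPy and SciPy are used).

```python
import numpy as np
from scipy import stats

def batch_means_ci(series, n_batches=32, alpha=0.05):
    m = len(series) // n_batches                    # batch size (leftover observations dropped)
    batches = np.asarray(series[:m * n_batches]).reshape(n_batches, m).mean(axis=1)
    grand_mean = batches.mean()
    half = stats.t.ppf(1 - alpha / 2, n_batches - 1) * batches.std(ddof=1) / np.sqrt(n_batches)
    return grand_mean - half, grand_mean + half

# Illustrative AR(1) output stream with steady-state mean 10 (assumed, not from the paper).
rng = np.random.default_rng(10)
out, x = [], 10.0
for _ in range(100_000):
    x = 10.0 + 0.8 * (x - 10.0) + rng.normal()
    out.append(x)
print(batch_means_ci(out))
```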

Journal ArticleDOI
TL;DR: In this article, an approximate dynamic programming approach is used that decomposes the dynamic programming formulation by the different days in the planning horizon to construct separable approximations to the value functions.
Abstract: This article considers a quite general dynamic capacity allocation problem. There is a fixed amount of daily processing capacity. On each day, jobs of different priorities arrive randomly and a decision has to be made about which jobs should be scheduled on which days. Waiting jobs incur a holding cost that is a function of their priority levels. The objective is to minimize the total expected cost over a finite planning horizon. The problem is formulated as a dynamic program, but this formulation is computationally difficult as it involves a high-dimensional state vector. To address this difficulty, an approximate dynamic programming approach is used that decomposes the dynamic programming formulation by the different days in the planning horizon to construct separable approximations to the value functions. Value function approximations are used for two purposes. First, it is shown that the value function approximations can be used to obtain a lower bound on the optimal total expected cost. Second, the valu...

Journal ArticleDOI
TL;DR: A hybrid approach based on Ant Colony Optimization and Simulated Annealing, called ACO_SA, is proposed for the design of communication networks that has the advantages of the ability to find higher performance solutions and to jump out of local minima to find better solutions.
Abstract: This article proposes a hybrid approach based on Ant Colony Optimization (ACO) and Simulated Annealing (SA), called ACO_SA, for the design of communication networks. The design problem is to find the optimal network topology for which the total cost is a minimum and the all-terminal reliability is not less than a given level of reliability. The proposed ACO_SA has the advantages of the ability to find higher performance solutions, created by the ACO, and the ability to jump out of local minima to find better solutions, created by the SA. The effectiveness of ACO_SA is investigated by comparing its results with those obtained by individual application of SA and ACO, which are basic forms of ACO_SA, two different genetic algorithms and a probabilistic solution discovery algorithm given in the literature for the design problem. Computational results show that ACO_SA has a better performance than its basic forms and the investigated heuristic approaches.