
Showing papers in "Quality Technology and Quantitative Management in 2013"


Journal ArticleDOI
TL;DR: An overview of this literature is given, covering some of the one- and two-chart schemes, including those appropriate in parameters-known and parameters-unknown situations; noting that normality is often an elusive assumption, available nonparametric schemes are also discussed.
Abstract: In the control chart literature, a number of one- and two-chart schemes have been developed to simultaneously monitor the mean and variance parameters of normally distributed processes. These “joint” monitoring schemes are useful for situations in which special causes can result in a change in both the mean and the variance, and they allow practitioners to avoid the inflated false alarm rate which results from simply using two independent control charts (one each for the mean and the variance) without adjusting for multiple testing. We present an overview of this literature covering some of the one- and two-chart schemes, including those that are appropriate in parameters known (standards known) and unknown (standards unknown) situations. We also discuss some of the joint monitoring schemes for multivariate processes, autocorrelated data, and individual observations. In addition, noting that normality is often an elusive assumption, we discuss some available nonparametric schemes for jointly monitoring locat...

112 citations
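
The inflated false alarm rate mentioned in the abstract is easy to quantify. Below is a minimal Python sketch (not from the paper) comparing the combined false alarm probability of two independent 3-sigma charts with a split of the overall rate across the two charts; the per-chart probabilities are textbook values, and everything else is illustrative.

```python
from scipy.stats import norm

# False alarm probability of a single two-sided 3-sigma Shewhart chart
alpha_single = 2 * norm.sf(3.0)            # ~0.0027

# Running two independent charts (mean and variance) without adjustment:
# at least one of them signals with probability
alpha_joint = 1 - (1 - alpha_single) ** 2  # ~0.0054, roughly doubled

# Splitting a target overall rate across the two charts (exact under independence)
alpha_target = alpha_single                # keep the familiar 0.0027 overall
alpha_per_chart = 1 - (1 - alpha_target) ** 0.5
k = norm.isf(alpha_per_chart / 2)          # widened control limit multiplier

print(f"single-chart alpha     : {alpha_single:.5f}")
print(f"unadjusted joint alpha : {alpha_joint:.5f}")
print(f"adjusted per-chart alpha: {alpha_per_chart:.5f} -> limit ~ {k:.3f} sigma")
```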


Journal ArticleDOI
TL;DR: In this article, the authors present a new efficient method to monitor the coefficient of variation (CV) by means of Run Rules (RR) type charts; monitoring the CV is a successful approach to Statistical Process Control when the process mean and standard deviation are not constant.
Abstract: Monitoring the coefficient of variation (CV) is a successful approach to Statistical Process Control when the process mean and standard deviation are not constant. In recent years the CV has been investigated by many researchers as the monitored statistic for several control charts. Viewed from this perspective, this paper presents a new efficient method to monitor the CV by means of Run Rules (RR) type charts. Tables are provided to show the statistical run length properties of the Shewhart-γ, RR2,3-γ, RR3,4-γ and RR4,5-γ control charts for several combinations of the in-control CV value γ0, the sample size n and the shift size τ. Comparative studies have been performed to find the best control chart for each combination. An example illustrates the use of these charts on real data gathered from a metal sintering process.

86 citations
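
As a rough illustration of the idea (not the paper's exact design), the sketch below simulates an in-control normal process, sets probability limits for the sample CV by Monte Carlo, and then applies a simple 2-of-3 run rule; the sample size, in-control parameters, and rule are arbitrary choices made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu0, gamma0 = 5, 10.0, 0.1          # sample size, in-control mean and CV (illustrative)
sigma0 = gamma0 * mu0

def sample_cv(rng, mu, sigma, n, reps):
    x = rng.normal(mu, sigma, size=(reps, n))
    return x.std(axis=1, ddof=1) / x.mean(axis=1)

# Monte Carlo probability limits for the in-control sample CV
cv0 = sample_cv(rng, mu0, sigma0, n, 200_000)
lcl, ucl = np.quantile(cv0, [0.00135, 0.99865])     # Shewhart-type limits
lwl, uwl = np.quantile(cv0, [0.02275, 0.97725])     # warning limits for the run rule

# Monitor a shifted process (CV increased by 50%)
cv_new = sample_cv(rng, mu0, 1.5 * sigma0, n, 30)

for t, cv in enumerate(cv_new, 1):
    beyond_control = cv < lcl or cv > ucl
    window = cv_new[max(0, t - 3):t]                 # last 3 points
    beyond_warning = np.sum((window < lwl) | (window > uwl))
    if beyond_control or beyond_warning >= 2:        # RR2,3-type rule (same-side check omitted for brevity)
        print(f"signal at sample {t}: CV = {cv:.3f}")
        break
```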


Journal ArticleDOI
TL;DR: In this paper, a two-phase testing procedure based on the quotient of two statistics is proposed to deal with the supplier selection problem for normally distributed processes with multiple independent characteristics based on process capability index CpU.
Abstract: In this paper, we consider the supplier selection problem for normally distributed processes with multiple independent characteristics based on the process capability index CpU. A two-phase testing procedure based on the quotient of two statistics is proposed to deal with the problem. We obtain the sampling distribution and the probability density function of the quotient. Several tables of critical values for the testing procedure and decision making are provided. The required sample sizes for various capability requirements, magnitudes of the difference between the two suppliers, and given power levels are also presented. A real application to TFT-LCD manufacturing processes is presented to demonstrate the efficacy of the proposed method.

32 citations
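
For intuition about the quantity being compared, the sketch below computes point estimates of CpU = (USL − μ)/(3σ) for two hypothetical suppliers and the quotient of the two estimates; the actual selection rule in the paper relies on the sampling distribution of this quotient and its tabulated critical values, which are not reproduced here. All data and limits are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
USL = 12.0                              # upper specification limit (illustrative)

def cpu_hat(x, usl):
    """Point estimate of the smaller-the-better index CpU = (USL - mean) / (3 * std dev)."""
    return (usl - x.mean()) / (3 * x.std(ddof=1))

# Hypothetical samples from two suppliers' processes
x1 = rng.normal(10.0, 0.50, size=100)   # supplier 1
x2 = rng.normal(10.2, 0.45, size=100)   # supplier 2

c1, c2 = cpu_hat(x1, USL), cpu_hat(x2, USL)
print(f"CpU-hat supplier 1: {c1:.3f}")
print(f"CpU-hat supplier 2: {c2:.3f}")
print(f"quotient (supplier 1 / supplier 2): {c1 / c2:.3f}")
# A decision would compare this quotient with a critical value derived from
# its sampling distribution, which is the core contribution of the paper.
```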


Journal ArticleDOI
TL;DR: The t and the exponentially weighted moving average (EWMA) t charts were introduced in quality control literature for cases when users are not able to accurately estimate the process standard devia... as discussed by the authors.
Abstract: The t and the exponentially weighted moving average (EWMA) t charts were introduced in quality control literature for cases when users are not able to accurately estimate the process standard devia...

31 citations
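
The truncated abstract refers to the t chart and the EWMA t chart, which replace the usual standardized mean with the one-sample t statistic so that no separate estimate of the process standard deviation is needed. Below is a rough Python sketch of that idea with arbitrary parameters; a proper control limit constant would be chosen from the chart's run length properties rather than the ad hoc value used here.

```python
import numpy as np

rng = np.random.default_rng(2)
mu0, n, lam, L = 5.0, 5, 0.2, 2.7       # target mean, subgroup size, smoothing, limit (illustrative)

ewma = 0.0
for t in range(1, 31):
    x = rng.normal(mu0 + (0.5 if t > 15 else 0.0), 1.0, size=n)   # mean shift after sample 15
    t_stat = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))      # one-sample t statistic
    ewma = lam * t_stat + (1 - lam) * ewma

    # Approximate steady-state limits, treating the t statistic as roughly unit-variance;
    # for small n its variance is (n-1)/(n-3), so exact designs adjust for this.
    limit = L * np.sqrt(lam / (2 - lam))
    if abs(ewma) > limit:
        print(f"EWMA-t signal at sample {t}: statistic = {ewma:.3f}")
        break
```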


Journal ArticleDOI
TL;DR: A new accumulative method is proposed, based on penalized likelihood estimators, that uses individual observations and is useful to detect small and persistent shifts in a process when sparsity is present.
Abstract: Multivariate control charts are essential tools in multivariate statistical process control. In real applications, when a multivariate process shifts, it occurs in either location or scale. Several...

29 citations


Journal ArticleDOI
Chin-Diew Lai
TL;DR: This work considers various methods of constructing discrete lifetime models by discretization of continuous lifetime models, in particular, through discretizations of their hazard rates.
Abstract: We consider various methods of constructing discrete lifetime models by discretizations of continuous lifetime models, in particular, through discretizations of their hazard rates. It is well known that there are at least two possible definitions of discrete hazard rates. While this rich source of supply is indeed a good thing, it could also become an impediment when selecting the best model for a given discrete data set. An example is given for each proposed construction method. Included also is a study on discrete distributions that have bathtub shaped hazard rates.

26 citations
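
To make the "two possible definitions of discrete hazard rates" concrete, the sketch below discretizes a Weibull survival function onto the integers and evaluates both common definitions; the Weibull parameters are arbitrary, and discretization via the survival function is just one of the constructions the paper surveys.

```python
import numpy as np

beta, eta = 0.7, 5.0                       # Weibull shape < 1 gives a decreasing hazard (illustrative)
S = lambda t: np.exp(-(t / eta) ** beta)   # continuous survival function

k = np.arange(0, 15)
pmf = S(k) - S(k + 1)                      # P(T = k) after discretization, T supported on 0, 1, 2, ...
surv = S(k)                                # P(T >= k)

h1 = pmf / surv                            # definition 1: h(k) = P(T = k) / P(T >= k), bounded by 1
h2 = np.log(S(k) / S(k + 1))               # definition 2: r(k) = ln S(k) - ln S(k+1), unbounded,
                                           # additive so that the cumulative hazard equals -ln S

for kk, a, b in zip(k, h1, h2):
    print(f"k={kk:2d}  h1={a:.4f}  h2={b:.4f}")
```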


Journal ArticleDOI
TL;DR: In this paper, the problem of designing an accelerated destructive degradation test (ADDT) with a nonlinear model was considered, and the optimal test plan was obtained by minimizing the asymptotic variance of the estimated 100p-th percentile of the product's lifetime distribution at the use condition.
Abstract: Degradation tests are powerful and useful tools for lifetime assessment of highly reliable products. In some applications, the degradation measurement process would destroy the physical characteristic of units when tested at higher than usual stress levels of an accelerating variable such as temperature, so that only one measurement can be made on each tested unit during the degradation testing. An accelerated degradation test giving rise to such degradation data is called an accelerated destructive degradation test (ADDT). The specification of the total sample size, the frequency of destructive measurements, the number of measurements at each stress level, and other decision variables is very important for planning and conducting an ADDT efficiently. A wrong choice of these decision variables may not only increase the experimental cost, but may also yield an imprecise estimate of the reliability of the product at the use condition. Motivated by a polymer data set, this article deals with the problem of designing an ADDT with a nonlinear model. Under the constraint that the total experimental cost does not exceed a pre-fixed budget, the optimal test plan is obtained by minimizing the asymptotic variance of the estimated 100p-th percentile of the product's lifetime distribution at the use condition. A sensitivity analysis is also carried out to examine the effects of changes in the decision variables on the precision of the estimator of the 100p-th percentile. A simulation study further shows that the simulated values are quite close to the asymptotic values when the sample sizes are large enough.

26 citations


Journal ArticleDOI
TL;DR: In this article, a new multivariate nonparametric statistical process control chart for monitoring location parameters is proposed, which is based on integrating a directional multivariate spatial-sign test and exponentially weighted moving average control scheme to on-line sequential monitoring.
Abstract: In many applications the shift directions of observation vectors are limited, which allows focusing detection power on a limited subspace with improved sensitivity. This paper develops a new multivariate nonparametric statistical process control chart for monitoring location parameters, which is based on integrating a directional multivariate spatial-sign test and exponentially weighted moving average control scheme to on-line sequential monitoring. The computation speed of the proposed scheme is fast with a similar computation effort to its parametric counterpart, regression-adjusted control charts. It has a distribution-free property over a broad class of population models, which implies the in-control run length distribution can attain or is always very close to the nominal one when using the same control limit designed for a multivariate normal distribution. This proposed control chart possesses some other appealing features. Simulation studies show that it is efficient in detecting small or moderate shifts, when the process distribution is heavy-tailed or skewed. Finally, a specific SPC example, multistage process control, is also presented to demonstrate the effectiveness of our method.

21 citations
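
The spatial-sign idea itself is simple: replace each (standardized) observation by its direction x/||x|| and apply an EWMA to these unit vectors. The sketch below is a rough, non-directional baseline of that construction with made-up parameters; the directional refinement, the regression adjustment, and the properly calibrated control limit from the paper are all omitted, and the limit h here is an arbitrary placeholder.

```python
import numpy as np

rng = np.random.default_rng(3)
p, lam = 3, 0.1                                  # dimension and smoothing constant (illustrative)
h = 14.0                                         # control limit; in practice calibrated by simulation

def spatial_sign(x):
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else np.zeros_like(x)

w = np.zeros(p)
for t in range(1, 201):
    shift = np.array([0.5, 0.5, 0.0]) if t > 100 else np.zeros(p)
    x = rng.standard_t(df=3, size=p) + shift     # heavy-tailed in-control data plus a location shift
    w = (1 - lam) * w + lam * spatial_sign(x)
    q = (2 - lam) * p / lam * w @ w              # approx. chi-square_p in control for small lam
    if q > h:
        print(f"signal at observation {t}: Q = {q:.2f}")
        break
```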


Journal ArticleDOI
TL;DR: In this paper, the problem of estimating the reliability R = P(X > Y) for a system with strength X and stress Y was considered and the best linear unbiased estimator and the modified maximum likelihood estimator based on ranked set sampling with unequal samples, when X and Y both follow exponential distributions were investigated.
Abstract: This paper considers the problem of estimating the reliability R = P(X > Y) for a system with strength X and stress Y . We present the best linear unbiased estimator and the modified maximum likelihood estimator based on ranked set sampling with unequal samples, when X and Y both follow exponential distributions. The properties of these estimators are investigated. The proposed estimators are shown to be superior to the known estimators based on simple random sampling and ranked set sampling. Finally, results of an application to a real data set are presented to illustrate some of the theoretical findings.

21 citations
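
For the exponential stress-strength case the target quantity has a simple closed form, R = P(X > Y) = E[X]/(E[X] + E[Y]). The sketch below shows the plug-in estimator under plain simple random sampling, purely as a point of reference, since the paper's estimators are built on ranked set sampling with unequal samples; the means and sample sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
mean_x, mean_y = 4.0, 2.0                       # true strength and stress means (illustrative)
R_true = mean_x / (mean_x + mean_y)             # P(X > Y) for independent exponentials

x = rng.exponential(mean_x, size=50)            # strength sample
y = rng.exponential(mean_y, size=50)            # stress sample
R_hat = x.mean() / (x.mean() + y.mean())        # MLE-type plug-in estimator under SRS

print(f"true R = {R_true:.3f}, plug-in estimate = {R_hat:.3f}")
```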


Journal ArticleDOI
TL;DR: This study extends reliability evaluation to consider a tolerable error rate, where network reliability is the probability that demand can be satisfied, in a stochastic flow network (SFN).
Abstract: Service-level agreements for data transmission often define criteria such as availability, delay, and loss. Internet service providers and enterprise customers are increasingly focusing on the tolerable error rate during transmission. Focusing on a stochastic flow network (SFN), this study extends reliability evaluation to consider a tolerable error rate, where network reliability is the probability that demand can be satisfied. In such an SFN, each component (branch or node) has several capacities and a transmission error rate. Network reliability can be regarded as a performance index for assessing the SFN. We propose an efficient algorithm based on minimal paths to find all minimal capacity vectors that allow the network to transmit d units of data under the tolerable error rate E. Network reliability is computed in terms of such vectors by the recursive sum of disjoint products algorithm. The proposed algorithm is tested on a benchmark network and the National Science Foundation Network. The comp...

20 citations
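
A crude way to see what "the probability that demand d can be satisfied" means for a stochastic flow network is the Monte Carlo sketch below, which samples a capacity state for each arc and checks the max flow. It needs the networkx package, uses a made-up four-node network, and ignores the paper's tolerable-error-rate constraint and its exact minimal-path / sum-of-disjoint-products machinery.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
d = 3                                            # demand to be transmitted (illustrative)

# Each arc has a discrete capacity distribution: (possible capacities, probabilities)
arcs = {
    ("s", "a"): ([0, 1, 2, 3], [0.05, 0.10, 0.25, 0.60]),
    ("s", "b"): ([0, 1, 2],    [0.05, 0.15, 0.80]),
    ("a", "t"): ([0, 1, 2, 3], [0.05, 0.10, 0.25, 0.60]),
    ("b", "t"): ([0, 1, 2],    [0.05, 0.15, 0.80]),
    ("a", "b"): ([0, 1],       [0.10, 0.90]),
}

hits, reps = 0, 20_000
for _ in range(reps):
    G = nx.DiGraph()
    for (u, v), (caps, probs) in arcs.items():
        G.add_edge(u, v, capacity=int(rng.choice(caps, p=probs)))
    flow_value, _ = nx.maximum_flow(G, "s", "t")
    hits += flow_value >= d

print(f"Monte Carlo estimate of reliability R(d={d}): {hits / reps:.4f}")
```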


Journal ArticleDOI
TL;DR: In this paper, the reliability analysis of a series system under a step-stress accelerated lifetime test with a Type-I censoring scheme is discussed, where the components are assumed to have independent, non-identical Weibull lifetime distributions and the lifetime of each component follows the cumulative exposure model.
Abstract: We discuss the reliability analysis of a series system under a step-stress accelerated lifetime test with a Type-I censoring scheme, where the components are assumed to have independent, non-identical Weibull lifetime distributions and the lifetime of each component follows the cumulative exposure model. In many cases, the exact component causing the failure of the system cannot be identified and the cause of failure is masked. We adopt the log-linear relationship between the stress variables and the Weibull scale parameters and apply Bayesian analysis to masked system life data when the probability of masking depends on the different components. Further, the reliability of the system and its components is estimated under usual operating conditions. The proposed method is illustrated through a simulation study.

Journal ArticleDOI
TL;DR: A multi-state Markov repairable system with redundant dependencies, using Markov theory and aggregated stochastic process theory to model the evolution of multi- state systems with load-sharing, is introduced.
Abstract: In multi-state systems with load-sharing, the load will be redistributed among the working units when some units fail. Hence dependencies exist among units. To model the evolution of these systems, a multi-state Markov repairable system with redundant dependencies is introduced in this paper. More than two states are allowed for each unit of the system, including perfect working, deterioration and complete failure. The state transition rate of each working unit is a function of the number of functioning units and is quantified by a redundant dependence function. A two-dimensional vector, whose elements denote respectively the number of units that are in perfect and degraded operating states, is presented to describe accurately the performance of the system. The state space of the system is divided into distinct state sets according to the value of the vector. The visit probability to a specified state set (the acceptable, excellent, operating and warning state sets), the time to the first sys...

Journal ArticleDOI
TL;DR: In this article, a comparison is presented between the statistical properties of the Shewhart X̄ chart and the Shewhart t chart when the process parameters are estimated from a Phase I data set of finite size.
Abstract: The estimation of the process parameters is an important issue to be settled before starting a control chart implementation in a manufacturing process: in fact, correctly estimating the in-control mean and standard deviation of the quality parameter to be monitored can prevent an unwanted deterioration of the statistical performance of the control chart. Usually, the estimation of the distribution parameters is performed during the Phase I implementation of the control chart and is based on a finite number m of preliminary samples, each having size n, which is as large as the manufacturing environment can afford. This paper presents a comparison between the statistical properties of the Shewhart X̄ chart and the Shewhart t chart when the process parameters are estimated from a Phase I data set having finite size. Mathematical expressions for the ARL and SDRL are suggested for both charts. Tables of the design parameters selected for the two charts are provided for different numbers of Phase I sam...
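
The effect of estimating the in-control parameters from m Phase I samples can be seen with a small simulation. The sketch below estimates the in-control ARL of a Shewhart X-bar chart whose limits are built from Phase I estimates, with all constants chosen only for illustration; the paper derives the corresponding ARL and SDRL expressions analytically, for both the X-bar and the t chart.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 25, 5                # number and size of Phase I samples (illustrative)
c4 = 0.9400                 # unbiasing constant for the sample std dev when n = 5
run_lengths = []

for _ in range(2000):
    # Phase I: estimate mean and standard deviation from m subgroups
    phase1 = rng.normal(0.0, 1.0, size=(m, n))
    mu_hat = phase1.mean()
    sigma_hat = phase1.std(axis=1, ddof=1).mean() / c4
    ucl = mu_hat + 3 * sigma_hat / np.sqrt(n)
    lcl = mu_hat - 3 * sigma_hat / np.sqrt(n)

    # Phase II: count samples until the first (false) alarm on in-control data
    rl = 0
    while True:
        rl += 1
        xbar = rng.normal(0.0, 1.0, size=n).mean()
        if xbar > ucl or xbar < lcl:
            break
    run_lengths.append(rl)

print(f"estimated in-control ARL with estimated limits: {np.mean(run_lengths):.0f}")
print(f"estimated SDRL: {np.std(run_lengths):.0f}")
print("(the known-parameter ARL of a 3-sigma X-bar chart is about 370)")
```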

Journal ArticleDOI
TL;DR: In this article, a discrete-time 1 X Geo G retrial queue with general retrial times was considered, and working vacations and vacation interruption policy was introduced into the retrial queuing.
Abstract: We consider a discrete-time 1 X Geo G retrial queue with general retrial times, and introduce working vacations and vacation interruption policy into the retrial queue. Firstly, we analyze the stationary condition for the embedded Markov chain at the departure epochs. Secondly, using supplementary variable method, we obtain the stationary probability distribution and some performance measures. Furthermore, we prove the conditional stochastic decomposition for the queue length in the orbit. Finally, some numerical examples are presented.

Journal ArticleDOI
TL;DR: An alternative approach based on the observed lifetime data is provided, which treats the failure data as the smallest extreme distribution and then models that extreme using the Peaks-over-Threshold (POT) model to estimate the lifetime quantile of interest.
Abstract: Due to cost and time considerations, it is difficult to observe all of the products' lifetimes within a reasonable time period. Hence, censored lifetime data are usually collected in real applications. Even when accelerated life tests (ALT) are used, censoring is usually inevitable. Especially for today's highly reliable products, the censoring proportions are often greater than 0.5. Such data are called highly censored data. In such cases, it is not easy to obtain a precise estimate of the reliability information of interest, even when the maximum likelihood (ML) method is utilized. In the scenario where highly censored data occur due to a time restriction (i.e., cost is not the main concern), a remedy could be to put a large number of devices on test. This is sometimes called quantity acceleration. The main purpose of the paper is to address this issue. For the whole censored data set (including failure times and the running times of unfailed items), traditional methods (in...

Journal ArticleDOI
TL;DR: In this article, a time-constrained multi-commodity multistate flow network (TMMN) is presented, where each arc employs two attributes, capacity and lead time, and the arc capacity is multi-state.
Abstract: Network structures have been widely adopted in transportation systems and supply chain systems. Delivery within the promised time frame is especially the most critical quality criterion for supply chain networks. However, little attention has been given to the performance evaluation of on-time delivery for multi-commodity networks. This paper presents a time-constrained multi-commodity multistate flow network (TMMN) which is characterized by (1) each arc employs two attributes, capacity and lead time; (2) the arc capacity is multistate; (3) different commodities consume the arc capacity differently and (4) the delivery has to be completed within the promised time frame. A new method is proposed to locate the optimal routing in a TMMN, as well as to estimate the network reliability of multi-commodity supply chains. The proposed method is targeted towards the situation where multi-commodities are conveyed through all disjointed minimal paths (MPs) in a network. This is the first study that develops a method to locate the most reliable routing in a TMMN and to estimate the network reliability as a performance index for on-time delivery of multi-commodity systems.

Journal ArticleDOI
TL;DR: The major contributions of the paper are extensions of the known results for series systems; that is, the situations when k < n, which have not been covered in previous studies, are presented in this paper.
Abstract: The performance of maintenance systems can be described by many indexes, such as availability, mean up-time and mean down-time. The availability is the most important measure among them...
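
Since the abstract is truncated, only a baseline illustration is offered here: for independent components, each with steady-state availability p, a k-out-of-n system is available with probability the sum over i from k to n of C(n, i) p^i (1 - p)^(n - i), and the series system of the known results is the special case k = n. The values below are arbitrary, and the paper's contribution concerns the richer maintained-system setting, not this textbook formula.

```python
from math import comb

def k_out_of_n_availability(k: int, n: int, p: float) -> float:
    """Steady-state availability of a k-out-of-n:G system with independent components."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.95                                       # per-component availability (illustrative)
n = 5
for k in range(1, n + 1):                      # k = n recovers the series-system result p**n
    print(f"k = {k}: availability = {k_out_of_n_availability(k, n, p):.5f}")
```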

Journal ArticleDOI
TL;DR: In this article, a principal-agent model is constructed to support resource allocation, and is then used to analyze commonly observed risk-aversion contracting between a foreign government customer (principal) and a localized 3PL logistics supplier (agent) with cooperative game combinations of fixed payment, cost-sharing incentive, and performance incentive conditions.
Abstract: The partnerships of performance-based contracting (PBC) between a government and an original equipment manufacturer (OEM) have been well demonstrated in most studies, but very few attempts have been made at such partnerships between a foreign government (FG) and a localized third-party logistics (3PL) supplier while they operate the same system. This article constructs a principal-agent model to support resource allocation, and then uses it to analyze commonly observed risk-aversion contracting between an FG customer (principal) and a localized 3PL logistics supplier (agent) with cooperative game combinations of fixed payment, cost-sharing incentive, and performance incentive conditions. Finally, a real military logistics service application in Taiwan is assessed with the model, which generates the maximum utilities under the cost-sharing incentive condition by using the offset obligation between the FG and the OEM.

Journal ArticleDOI
TL;DR: A method is proposed that integrates the idea of frailty, which accounts for the subsampling effect, and the technique of multiple imputation for analyzing experimental data, and comprehensive comparison studies between the proposed method and existing methods are conducted.
Abstract: Many reliability life tests contain blocking or subsampling. This is often the case when treatments are applied directly to test stands rather than to individual specimens, or when specimens come from different production batches. Incorrectly assuming a completely randomized design underestimates the standard errors by overstating the true experimental degrees of freedom. In this paper, a survey of existing approaches to analyzing reliability life test data under subsampling is conducted. We propose a method that integrates the idea of frailty, which accounts for the subsampling effect, with the technique of multiple imputation for analyzing experimental data. A step-by-step description of the approach is presented, followed by a numerical example based on a popular reliability dataset. Finally, comprehensive comparison studies between the proposed method and existing methods are conducted.

Journal ArticleDOI
TL;DR: In this paper, the maximum likelihood estimation and confidence intervals using asymptotic distribution and two parametric bootstrap resampling methods for parameter Q are explored, and a simulation study concentrating mainly on the extreme value distribution illustrates the accuracy of these confidence intervals.
Abstract: In this article, estimation of the probability Q = P(Y < X) when X and Y are two independent but not identically distributed location-scale random variables under the joint progressive Type-II censoring scheme is studied. The maximum likelihood estimation and confidence intervals using the asymptotic distribution and two parametric bootstrap resampling methods for the parameter Q are explored. A simulation study concentrating mainly on the extreme-value distribution illustrates the accuracy of these confidence intervals. Finally, a numerical example is used to illustrate the proposed methods. Suppose that m units of product A, X1, ..., Xm, and n units of product B, Y1, ..., Yn, are selected from two production lines and are then placed on a life-test simultaneously; successive failure times and the corresponding product types are recorded, and the experiment is terminated as soon as a specified total number of required failures (say, k) has occurred. Basu (4), Johnson and Mehrotra (9), Bhattacharyya and Mehrotra (4), and Balakrishnan and Rasouli (3) have discussed nonparametric or parametric tests of hypotheses and exact likelihood inference based on joint Type-II right-censored data. Lack of flexibility to remove units from the experiment at any time point other than the terminal point is the major drawback of such censoring schemes. So, a more general censoring scheme, called the joint progressive Type-II right censoring scheme, was introduced recently. Joint censored data are of the joint progressive Type-II right type when they are censored by the removal of a prespecified number of survivors whenever an individual fails; this continues until a fixed number of k failures has occurred, at which stage the remainder of the surviving individuals are also removed/censored. This scheme includes ordinary joint Type-II right censoring and complete data as special cases. Under the joint progressive Type-II right censoring scheme, Rasouli and Balakrishnan (17) studied the exact likelihood inference for two exponential populations, while Parsi et al. (14) investigated the conditional likelihood inference for two Weibull populations.

Journal ArticleDOI
TL;DR: In this article, the mean residual life of discrete-time multi-state systems that have M + 1 states of working efficiency is studied, under the assumption that the degradation in the multi-state system follows a Markov process.
Abstract: The mean residual life function is an important characteristic in reliability and survival analysis. Although many papers have studied the mean residual life of binary systems, the study of this characteristic for multi-state systems is new. In this paper, we study the mean residual life of discrete-time multi-state systems that have M + 1 states of working efficiency. In particular, we consider two different definitions of the mean residual life function and evaluate them assuming that the degradation in the multi-state system follows a Markov process.
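
A closely related quantity, the expected number of remaining time steps given the current working state of a discrete-time Markov degradation model, can be read off the fundamental matrix N = (I − Q)^(-1), where Q is the transition matrix restricted to the working states. The chain below is a made-up four-state example, not one from the paper, and the time-indexed mean residual life definitions studied there are not reproduced.

```python
import numpy as np

# One-step transition matrix over states {perfect, degraded, badly degraded, failed};
# the last state is absorbing. All numbers are made up for illustration.
P = np.array([
    [0.90, 0.07, 0.02, 0.01],
    [0.00, 0.85, 0.10, 0.05],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],
])

Q = P[:3, :3]                                   # transitions among the working states
N = np.linalg.inv(np.eye(3) - Q)                # fundamental matrix
expected_residual_life = N @ np.ones(3)         # expected steps to failure from each working state

for state, mrl in zip(["perfect", "degraded", "badly degraded"], expected_residual_life):
    print(f"expected remaining lifetime from '{state}' state: {mrl:.2f} time steps")
```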

Journal ArticleDOI
TL;DR: In this paper, the variable sample sizes and variable sampling intervals multivariate exponentially weighted moving average control chart using double warning lines (MEWMA-DWL) were investigated by using the Markov chain approach.
Abstract: This study investigates the variable sample sizes and variable sampling intervals multivariate exponentially weighted moving average control chart using double warning lines (MEWMA-DWL). The steady-state average time to signal of the MEWMA-DWL control chart is obtained by using the Markov chain approach. Then the performance of the MEWMA-DWL control chart is compared with the corresponding fixed sampling rate (FSR) MEWMA control chart. From the numerical results, the MEWMA-DWL control chart is faster than the corresponding FSR MEWMA control chart in detecting small and moderate shifts in the process mean vector.
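
For readers unfamiliar with the baseline being improved upon, the sketch below simulates the in-control ARL of an ordinary fixed-sampling-rate MEWMA chart by Monte Carlo; the smoothing constant, dimension, and control limit h are illustrative only, and the paper's variable sample size / sampling interval scheme and its Markov chain ATS computation are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
p, lam, h = 2, 0.1, 8.66        # dimension, smoothing constant, control limit (h is illustrative)

def run_length(rng, p, lam, h, shift=None):
    """Number of observations until the MEWMA statistic first exceeds h."""
    shift = np.zeros(p) if shift is None else shift
    z = np.zeros(p)
    t = 0
    while True:
        t += 1
        x = rng.standard_normal(p) + shift
        z = lam * x + (1 - lam) * z
        # using the asymptotic covariance of z, (lam / (2 - lam)) * I
        t2 = (2 - lam) / lam * z @ z
        if t2 > h:
            return t

rls = [run_length(rng, p, lam, h) for _ in range(2000)]
print(f"estimated in-control ARL of the FSR MEWMA chart: {np.mean(rls):.0f}")
```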

Journal ArticleDOI
TL;DR: A discrete-time batch arrival queue with geometrically working vacations and vacation interruption with embedded Markov chain technique is treated, which shows the influence of the parameters on some crucial performance characteristics of the system.
Abstract: This paper treats a discrete-time batch arrival queue with geometrically working vacations and vacation interruption. The main purpose of this paper is to present a performance analysis of this system. For this purpose, we first derive the probability generating function (PGF) of the stationary system length at the departure epochs based on the embedded Markov chain technique. Next, we present the stationary distributions of the system lengths as well as some performance measures at random epochs by using the supplementary variable method. Thirdly, still based on the supplementary variable method we give a distribution for the number of the customers at the beginning of a busy period and give a stochastic decomposition formula for the PGF of the stationary system length at the departure epochs. Additionally, we investigate the relation between our discrete-time system and its continuous counterpart. Finally, some numerical examples show the influence of the parameters on some crucial performance c...

Journal ArticleDOI
TL;DR: In this article, a finite Markov chain imbedding approach is used to obtain the acceptance and rejection probabilities of two start-up demonstration tests with start-up delay, and a procedure for selecting the optimal parameter values of the new models is given.
Abstract: Start-up demonstration tests for products with start-up delay are introduced, which are suitable for studying the start-up reliability of products with start-up delay. The probability mass function, the distribution function and the mean of the test length, as well as the acceptance and rejection probabilities of two start-up demonstration tests with start-up delay, are derived by using the finite Markov chain imbedding approach. In addition, a procedure for selecting the optimal parameter values of the new models is given. Finally, numerical results show that the new models are more efficient.
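
The finite Markov chain imbedding idea can be illustrated on the simplest start-up demonstration test: accept once k consecutive successful start-ups are observed. Imbed the count of current consecutive successes as the state of a Markov chain with an absorbing acceptance state, and read the distribution of the test length off powers of the transition matrix. The sketch below uses an arbitrary k and success probability and omits the start-up delay and the rejection rules treated in the paper.

```python
import numpy as np

p_success, k = 0.9, 4            # start-up success probability and required consecutive successes (illustrative)

# States 0..k-1: current run of consecutive successes; state k: accepted (absorbing)
P = np.zeros((k + 1, k + 1))
for s in range(k):
    P[s, s + 1] = p_success      # a successful start-up extends the run
    P[s, 0] = 1 - p_success      # a failed start-up resets the run
P[k, k] = 1.0

# Distribution of the test length: P(accepted within n start-ups) via repeated multiplication
state = np.zeros(k + 1)
state[0] = 1.0
for n in range(1, 11):
    state = state @ P
print(f"P(accept within 10 start-ups) = {state[k]:.4f}")

# Expected test length from the fundamental matrix of the transient states
Q = P[:k, :k]
N = np.linalg.inv(np.eye(k) - Q)
print(f"expected number of start-ups to acceptance: {N[0].sum():.2f}")
```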

Journal ArticleDOI
TL;DR: In this paper, six new three-class attributes single sampling plans are introduced, and corresponding operating characteristic (OC) functions are constructed separately, and an important concept is introduced which is named non-conforming equivalent number and then all sampling plans can be considered as (n, ce).
Abstract: In this paper, six new three-class attributes single sampling plans are introduced, and the corresponding operating characteristic (OC) functions are constructed separately. An important concept, named the non-conforming equivalent number, is introduced, and then all sampling plans can be considered as (n, ce). In order to design the sampling plans, the propositions of the OC function are summarized and an approach is proposed to build a set of equations by using the four specified quantities P0, P1, α, β and important points on the p1-p2 plane. Several principles should be followed, and the number of equations should equal the number of parameters required in a sampling plan when using this approach. Finally, numerical examples and some discussions based on numerical computation results are given to illustrate the obtained results.

Journal ArticleDOI
TL;DR: This paper analyzes a discrete-time single server queueing system with balking and multiple working vacations, obtaining the closed form expressions for the steady-state probabilities at arbitrary and outside observer’s observation epochs.
Abstract: This paper analyzes a discrete-time single server queueing system with balking and multiple working vacations. The arriving customers balk (that is, do not join the queue) with a probability. The inter-arrival times of customers are assumed to be independent and geometrically distributed. The server works at a different rate rather than completely stopping service during vacations. The service times during a service period, service times during a vacation period and vacation times are assumed to be geometrically distributed. We obtain the closed form expressions for the steady-state probabilities at arbitrary and outside observer’s observation epochs. Computational experiences with a variety of numerical results are discussed in the form of tables and graphs. Moreover, some queueing models discussed in the literature are derived as special cases of our model.

Journal ArticleDOI
TL;DR: This paper studies and compares the equilibrium mixed strategies of customers with different decision criteria in an unobservable single-server queue with multiple vacations and finds that both the equilibrium threshold expressions of customers’ effective arrival rates based on each criterion are consistent with each other except for the Laplace thresholds.
Abstract: This paper studies and compares the equilibrium mixed strategies of customers with different decision criteria in an unobservable single-server queue with multiple vacations. Arriving customers cannot observe the queue length or the server state, whereas they are informed that the service time is exponentially distributed with an unknown parameter, which either takes one of two values or belongs to a range. We consider three kinds of customers classified by their different waiting cost functions: linear, quadratic and exponential. We find that, for the two types of partial information on the service time parameter, the equilibrium threshold expressions of the customers' effective arrival rates based on each criterion are consistent with each other, except for the Laplace thresholds.

Journal ArticleDOI
TL;DR: In this article, the authors presented a general procedure based on the statistical distance metric and the geometric distance metric to evaluate the process yield for a manufactured product with multivariate normal/non-normal data.
Abstract: To evaluate the process yield for a manufactured product with multiple characteristics is an important task in the manufacturing industry. We present a general procedure based on the statistical distance metric and the geometric distance metric to evaluate the process yield for a manufactured product with multivariate normal/non-normal data. We select the normal distribution and the three-parameter Burr XII distribution to fit the statistical distance data and the geometric distance data, respectively. The process yield indices provide an exact measure of the overall process yield. This study uses three real examples to demonstrate the performance of the proposed approach. The results show that our procedure is an effective approach of evaluating the process yield for a manufactured product with multiple characteristics.

Journal ArticleDOI
TL;DR: The exact distribution of the ratio of two independent Birnbaum-Saunders random variables is derived in this paper, where the need to study the ratio arises from the definition of the Birnbaum-Saunders distribution itself.
Abstract: The exact distribution of the ratio of two independent Birnbaum-Saunders random variables is derived. The need to study the ratio arises from the definition of the Birnbaum-Saunders distribution itself. An application of the ratio distribution is discussed.

Journal ArticleDOI
TL;DR: A cost model is constructed and a stability condition for the system and the stationary performance measures are developed by using the matrix-analytical method to determine the optimal vacation policy and the optimal service rate of each server.
Abstract: We study a Markovian queueing system with unreliable servers and two possible groups of servers taking two different types of vacations. Once the number of idle servers reaches one of the two critical numbers in the system, a group of servers will take a single synchronous vacation (or leave to do a secondary job). After taking the vacation, the servers return to attend the queue (or stay idle waiting for the next arriving customer). Therefore, there can be at most two groups of servers taking two different types of vacations at a time (doing secondary jobs). The servers are subject to failure while attending the queue. The stability condition for the system and the stationary performance measures are developed by using the matrix-analytical method. A cost model is constructed to determine the optimal vacation policy and the optimal service rate of each server. Some numerical results are presented to illustrate the optimization procedures.