
Showing papers in "International Statistical Review in 1993"


Journal ArticleDOI
TL;DR: In this paper, the authors provide a critical survey of the literature on the use of sampling weights for analytic inference about model parameters and develop guidelines for how to incorporate the weights in the analysis.
Abstract: Summary The purpose of this paper is to provide a critical survey of the literature, directed at answering two main questions. i) Can the use of the sampling weights be justified for analytic inference about model parameters and if so, under what circumstances? ii) Can guidelines be developed for how to incorporate the weights in the analysis? The general conclusion of this study is that the weights can be used to test and protect against informative sampling designs and against misspecification of the model holding in the population. Six approaches for incorporating the weights in the inference process are considered. The first four approaches are intended to yield design consistent estimators for corresponding descriptive population quantities of the model parameters. The other two
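As a rough illustration of what incorporating the weights can mean in practice (not taken from the paper), the sketch below contrasts an unweighted sample mean with a design-weighted, Hájek-type mean built from hypothetical inclusion probabilities; the data and probabilities are invented for the example.

```python
# Minimal sketch (not from the paper): contrast an unweighted sample mean with a
# design-weighted (Hajek-type) estimator, where w[i] = 1 / inclusion probability.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(50.0, 10.0, size=200)          # hypothetical survey responses
pi = rng.uniform(0.05, 0.5, size=200)         # hypothetical inclusion probabilities
w = 1.0 / pi                                  # sampling weights

unweighted_mean = y.mean()
weighted_mean = np.sum(w * y) / np.sum(w)     # design-consistent for the population mean

print(f"unweighted: {unweighted_mean:.2f}, weighted: {weighted_mean:.2f}")
```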

552 citations


Journal ArticleDOI
TL;DR: In this paper, the algebraic structure of fractional factorial (FF) designs with minimum aberration is explored and an algorithm for constructing complete sets of FF designs is proposed.
Abstract: Summary Fractional factorial (FF) designs with minimum aberration are often regarded as the best designs and are commonly used in practice. There are, however, situations in which other designs can meet practical needs better. A catalogue of designs would make it easy to search for 'best' designs according to various criteria. By exploring the algebraic structure of the FF designs, we propose an algorithm for constructing complete sets of FF designs. A collection of FF designs with 16, 27, 32 and 64 runs is given.
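For readers unfamiliar with how a fractional factorial is generated, here is a minimal sketch (not the authors' cataloguing algorithm) that builds a 2^(4-1) design from the defining relation I = ABCD by aliasing D with ABC.

```python
# Minimal sketch (not the authors' algorithm): build a 2^(4-1) fractional factorial
# with defining relation I = ABCD by setting D = ABC in a full 2^3 design.
from itertools import product

full_2_3 = list(product([-1, 1], repeat=3))               # full factorial in A, B, C
design = [(a, b, c, a * b * c) for a, b, c in full_2_3]   # D = ABC

for run in design:
    print(run)   # 8 runs of the resolution IV half-fraction
```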

256 citations


Journal ArticleDOI
TL;DR: A review of the basic statistical and other ideas behind accelerated testing can be found in this article, along with an overview of some current and planned statistical research to improve accelerated test planning and methods.
Abstract: Summary Accelerated tests are used to obtain timely information on the life distribution or performance over time of products. Test units are used more frequently than usual or are subjected to higher than usual levels of stresses such as temperature and voltage. Then the results are used to make predictions about product life or performance over time at the more moderate use or design conditions. Changes in technology, the calls for rapid product development, and the need to continuously improve product reliability have put new demands on the applications of these tests. In this paper we briefly review the basic statistical and other ideas behind accelerated testing and give an overview of some current and planned statistical research to improve accelerated test planning and methods. Today's manufacturers are facing strong pressure to develop newer, higher technology products in record time, while improving productivity, product field reliability, and overall quality. This has motivated the development of methods like concurrent engineering and encouraged wider use of designed experiments for product and process improvement efforts. The requirements for higher reliability have increased the need for more up-front testing of materials, components and systems. This is in line with the generally accepted modern quality philosophy for producing high reliability products: achieve high reliability by improving the design and manufacturing processes; move away from reliance on inspection to achieve high reliability. Estimating the time-to-failure distribution or long-term performance of components of high reliability products is particularly difficult. Most modern products are designed to operate without failure for years, tens of years, or more. Thus few units will fail or degrade appreciably in a test of practical length at normal use conditions. For this reason, accelerated tests (ATs) are used widely in manufacturing industries, particularly to obtain timely information on the reliability of product components and materials. Generally, information from tests at high levels of stress (e.g., use rate, temperature, voltage, or pressure) is extrapolated, through a physically reasonable statistical model, to obtain estimates of life or long-term performance at lower, normal levels of stress. In some cases stress is increased or otherwise changed during the course of a test (step-stress and progressive-stress ATs). AT results are used in the reliability-design process to assess or demonstrate component and subsystem reliability, certify components, detect failure
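Accelerated testing commonly relies on a physically motivated model such as the Arrhenius relationship for temperature acceleration; the sketch below is a generic illustration with an assumed activation energy, not a result or model from the paper.

```python
# Minimal sketch (assumed Arrhenius relationship, not results from the paper):
# acceleration factor between a stress temperature and the use temperature.
import math

BOLTZMANN_EV = 8.617e-5          # eV per kelvin
activation_energy = 0.7          # eV, hypothetical value for the failure mechanism

def acceleration_factor(t_use_c, t_stress_c, ea=activation_energy):
    """Ratio of life at use temperature to life at stress temperature (Arrhenius model)."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp(ea / BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

print(acceleration_factor(40.0, 120.0))   # e.g. extrapolating a 120 C test to 40 C use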

151 citations


Journal ArticleDOI
TL;DR: In this article, a new and consistent rank test for bivariate dependence is developed; its test statistic can be obtained by removing the first Hájek projection from the quantity n^(-2) Σ_{j,k} |X'_j - X'_k| |Y'_j - Y'_k| built from normal scores.
Abstract: Summary A new and consistent rank test for bivariate dependence is developed. Let X'_i and Y'_i denote the (approximate) normal scores associated with the iid vectors (X_i, Y_i), i = 1, ..., n. Then the proposed test statistic may be obtained by removing the first Hájek projection from the quantity n^(-2) Σ_{j,k} |X'_j - X'_k| |Y'_j - Y'_k|. Empirical characteristic function considerations are used in our development and some related graphical methods are proposed. Some difficulties that arise in extensions to dimension k > 2 are noted. A small simulation study provides evidence of the effectiveness of the new procedure.
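A minimal sketch of the raw double-sum quantity named in the abstract, built from approximate normal scores; the paper's statistic additionally removes the first Hájek projection, which is omitted here, and the function name is ours.

```python
# Minimal sketch: the raw quantity n^(-2) * sum_{j,k} |X'_j - X'_k| * |Y'_j - Y'_k|
# computed from approximate normal scores. The paper's statistic further removes
# the first Hajek projection, which is not done here.
import numpy as np
from scipy.stats import norm, rankdata

def raw_dependence_quantity(x, y):
    n = len(x)
    xs = norm.ppf(rankdata(x) / (n + 1))       # approximate normal scores of x
    ys = norm.ppf(rankdata(y) / (n + 1))       # approximate normal scores of y
    dx = np.abs(xs[:, None] - xs[None, :])     # |X'_j - X'_k|
    dy = np.abs(ys[:, None] - ys[None, :])     # |Y'_j - Y'_k|
    return (dx * dy).sum() / n**2

rng = np.random.default_rng(1)
x = rng.normal(size=100)
print(raw_dependence_quantity(x, 0.5 * x + rng.normal(size=100)))
```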

144 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered some theoretical and graphical aspects of control charting and relations between this area and the field of engineering process control, and discussed generalizations of this technique that enable one to incorporate concomitant information in the decision making process, as well as introduce weights which not only enhance the statistical performance of a Cusum chart with respect to drifts in the process mean, but also lead to a type of chart suitable for the purpose of multivariate statistical process control.
Abstract: Summary Statistical control schemes are widely used in industry for monitoring the quality of manufactured products. These schemes provide statistically motivated criteria for detecting deviations in behavior of incoming data from some expected or desirable patterns. The present work considers some theoretical and graphical aspects of control charting and relations between this area and the field of engineering process control. The emphasis of the paper is on the Cusum technique and its relation to other statistical tools, such as the Exponentially Weighted Moving Average (EWMA). Also discussed are generalizations of this technique that enable one to incorporate concomitant information in the decision making process, as well as introduce weights which not only enhance the statistical performance of a Cusum chart with respect to drifts in the process mean, but also lead to a type of chart suitable for the purpose of multivariate statistical process control. Use of the discussed techniques is illustrated by means of several examples.
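For concreteness, the sketch below implements the two standard monitoring recursions the paper builds on, a one-sided CUSUM and an EWMA; it does not reproduce the paper's weighted or multivariate generalizations, and the simulated data are illustrative.

```python
# Minimal sketch of the two monitoring recursions discussed (illustrative only):
# a one-sided CUSUM and an EWMA applied to a stream of readings.
import numpy as np

def cusum_upper(x, target, k):
    """One-sided upper CUSUM: C_t = max(0, C_{t-1} + x_t - target - k)."""
    c, out = 0.0, []
    for xt in x:
        c = max(0.0, c + xt - target - k)
        out.append(c)
    return np.array(out)

def ewma(x, lam, start):
    """EWMA: z_t = lam * x_t + (1 - lam) * z_{t-1}."""
    z, out = start, []
    for xt in x:
        z = lam * xt + (1.0 - lam) * z
        out.append(z)
    return np.array(out)

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.0, 1, 50)])  # mean drifts up
print(cusum_upper(data, target=0.0, k=0.5)[-5:])
print(ewma(data, lam=0.2, start=0.0)[-5:])
```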

96 citations


Journal ArticleDOI
TL;DR: In this article, a new diagnostic check for detecting periodic correlation in fitted ARMA models is developed, which is recommended for routine use when fitting seasonal ARMA models, and it is shown that many seasonal economic time series also exhibit periodic correlation.
Abstract: Summary The merits of the modelling philosophy of Box & Jenkins (1970) are illustrated with a summary of our recent work on seasonal river flow forecasting. Specifically, this work demonstrates that the principle of parsimony, which has been questioned by several authors recently, is helpful in selecting the best model for forecasting seasonal river flow. Our work also demonstrates the importance of model adequacy. An adequate model for seasonal river flow must incorporate seasonal periodic correlation. The usual autoregressive-moving average (ARMA) and seasonal ARMA models are not adequate in this respect for seasonal river flow time series. A new diagnostic check for detecting periodic correlation in fitted ARMA models is developed in this paper. This diagnostic check is recommended for routine use when fitting seasonal ARMA models. It is shown that this diagnostic check indicates that many seasonal economic time series also exhibit periodic correlation. Since the standard forecasting methods are inadequate on this account, it can be concluded that in many cases the forecasts produced are sub-optimal. Finally, a limitation of the arbitrary combination of forecasts is also illustrated. Combining forecasts from an adequate parsimonious model with an inadequate model did not improve the forecasts, whereas combining the forecasts of two inadequate models did yield an improvement in forecasting performance. These findings also support the model building philosophy of Box & Jenkins. The non-intuitive finding of Newbold & Granger (1974) and Winkler & Makridakis (1983) that the apparently arbitrary combination of forecasts from similar models will lead to improved forecasting performance is not supported by our case study of river flow forecasting.
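A rough check in the spirit of the abstract, not the paper's diagnostic: compute the lag-1 correlation of model residuals separately for each season, since markedly unequal values across seasons point to periodic correlation that an ordinary ARMA fit ignores. The period of 12 and the simulated residuals below are illustrative.

```python
# Rough illustration (not the paper's diagnostic): lag-1 correlation of ARMA residuals
# computed separately for each season of the year.
import numpy as np

def seasonal_lag1_correlations(residuals, period=12):
    r = np.asarray(residuals)
    corrs = []
    for m in range(period):
        idx = np.arange(m, len(r), period)
        idx = idx[idx >= 1]                      # each point needs a predecessor
        cur, prev = r[idx], r[idx - 1]
        corrs.append(np.corrcoef(cur, prev)[0, 1])
    return np.array(corrs)

rng = np.random.default_rng(3)
resid = rng.normal(size=240)                     # stand-in for monthly model residuals
print(seasonal_lag1_correlations(resid, period=12).round(2))
```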

60 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss some of the statistics of estimated rotations with special attention to the statistics that applies in the reconstruction of tectonic plate motions and present two models: the spherical regression model, that deals with a highly idealized form of the data, and a more realistic analysis for the types of data used in plate reconstructions.
Abstract: Summary This paper discusses some of the statistics of estimated rotations with special attention to the statistics that applies in the reconstruction of tectonic plate motions. Two models are presented: the spherical regression model, which deals with a highly idealized form of the data, and a more realistic analysis for the types of data used in plate reconstructions. The discussion of the spherical regression model focuses on the interplay of the geometry of the plate boundaries and the statistical properties of a least squares estimator. Because the errors in tectonic data are extremely concentrated, the more realistic analysis reduces to a linear regression. Once this is realized, many problems of geophysical interest, such as triple junctions, can be handled. The paper closes with a selection of possible problems for further consideration.

40 citations


Journal ArticleDOI
TL;DR: By employing a GIS-assisted computer-intensive sampling strategy, it is shown how it is possible to substantially reduce the costs of surveys based on area sampling whilst maintaining the same level of accuracy.
Abstract: The paper is divided into three parts. The first part reviews GIS technologies as essential background to the remainder of the paper. The second part of the paper aims to show the impact and potential of employing GIS technologies in survey processing and, in particular, in survey design. We show how, by employing a GIS-assisted computer-intensive sampling strategy, it is possible to substantially reduce the costs of surveys based on area sampling whilst maintaining the same level of accuracy. The third part of the paper aims at increasing awareness among GIS users of distortion effects induced on statistical analysis by the error propagation which occurs when GIS operations are based on two or more maps that individually contain errors. The paper considers the error properties of output maps in such circumstances.

40 citations


Journal ArticleDOI
TL;DR: This paper presents the ASPC concept and its applications to practitioners, and pre-planning for the use of ASPC in new processes is discussed.
Abstract: Algorithmic Statistical Process Control (ASPC) is an approach to quality improvement that reduces predictable quality variations using feedback and feedforward techniques, and then monitors the entire system to detect changes. As such, it is a marriage of control theory and statistical process control (SPC). Control theoretical concepts are used to minimize deviations from target by process adjustments; SPC is used to gain fundamental improvements. Where applicable, ASPC is a logical next step in the drive for continuous quality improvement. This paper presents the ASPC concept and its applications to practitioners. Technical and non-technical requirements and factors conducive to the use of ASPC are emphasized, and pre-planning for the use of ASPC in new processes is discussed.

24 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss the IC industry today and how manufacturing issues cause difficulties in analysis using standard tools, with control charts and acceptance sampling plans highlighted, and suggest how these tools can be modified to account for non-independent defect behavior.
Abstract: Statistical tools which depend on the assumption of defect independence are often inappropriate in the integrated circuit (IC) environment, where this assumption is found not to hold. All standard tools and techniques must be re-examined to assess their validity in a non-independent defect environment. This paper reviews current work in the field. We discuss the IC industry today and how manufacturing issues cause difficulties in analysis using standard tools; control charts and acceptance sampling plans are highlighted. Suggestions on how these tools can be modified to account for this non-independent behavior are given.

23 citations


Journal ArticleDOI
TL;DR: In this paper, the underlying philosophy of modern quality improvement is seen as the mobilization of presently available, often untapped, sources of knowledge and knowledge gathering, including the following: (i) that the whole workforce possesses useful knowledge and creativity; (ii) that every system by its operation produces information on how it can be improved; (iii) that simple procedures can be learned for better monitoring and adjustment of processes; and (iv) that elementary principles of experimental design can be put to use to increase many times over the efficiency of experimentation for process improvement, development, and research.
Abstract: Beginning from Bacon's famous aphorism that 'Knowledge Itself is Power', the underlying philosophy of modern quality improvement is seen as the mobilization of presently available sources of knowledge and knowledge gathering. These resources, often untapped, include the following: (i) that the whole workforce possesses useful knowledge and creativity; (ii) that every system by its operation produces information on how it can be improved; (iii) that simple procedures can be learned for better monitoring and adjustment of processes; and (iv) that elementary principles of experimental design can be put to use to increase many times over the efficiency of experimentation for process improvement, development, and research.

Journal ArticleDOI
TL;DR: Differences in mortality by level of urbanization and how these might be affected by differences in smoking patterns are discussed.
Abstract: This is an exploratory study of geographical factors affecting cancer mortality in the United States. Data were collected by the National Cancer Institute for the years 1950-1969 and concern mortality from cancer of the trachea, bronchus and lung combined for white males. The authors discuss differences in mortality by level of urbanization and how these might be affected by differences in smoking patterns.

Journal ArticleDOI
TL;DR: In this article, the distributional characteristics of the residuals are considered under both the assumed sampling model and the probability model induced by the forecasting system, and the distributions of residuals under both sampling and probability models are compared.
Abstract: Summary The probability integral transform and its conditional version provide us with means of assessing the performance of statistical forecasting systems. The distributional characteristics of the residuals are considered under both the assumed sampling model and the probability model induced by the forecasting system. The CPIT can also serve as a tool to generate predictive distributions.
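A minimal sketch of the probability integral transform (PIT) idea described above: if the predictive distributions F_t are correct, the values u_t = F_t(x_t) should behave like iid uniforms. The Gaussian predictive distributions below are purely illustrative, not the paper's forecasting system.

```python
# Minimal sketch of the PIT check: under a correct forecasting system,
# u_t = F_t(x_t) should look like an iid Uniform(0, 1) sample.
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(4)
outcomes = rng.normal(loc=2.0, scale=1.5, size=500)    # realized values
pit = norm.cdf(outcomes, loc=2.0, scale=1.5)           # u_t = F_t(x_t), here F_t is correct

print(kstest(pit, "uniform"))   # uniformity should not be rejected for a correct system
```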

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of estimating the failure time distributions under a random right censoring model with dependent censoring and show that under general notions of positive/negative dependence, the usual Kaplan-Meier estimator and associated confidence bands provide upper/lower bounds for the survival function of interest.
Abstract: Summary This paper deals with estimation under a random right censoring model with dependent censoring. This arises, for example, in competing-risks problems in reliability and medical studies where a system is made up of several components, and it fails as soon as one of the components fails. The failure times of the other components are then randomly right-censored. Based on the censored data, one has to make inferences about the failure time distributions of the individual components. Under independent (or more generally non-informative) censoring, the Kaplan-Meier estimator consistently estimates the failure time distributions. When censoring is dependent, this is no longer true. In competing-risks models where the components are all subject to the same environment and stress conditions, the censoring mechanisms are likely to be (positively) dependent. In this paper, we consider nonparametric empirical bounds for the failure time distributions under dependent censoring. In particular, we show that under general notions of positive/negative dependence, the usual Kaplan-Meier estimator and associated confidence bands provide upper/lower bounds for the survival function of interest. These are considerably better than the worst-case bounds. A real data set is used to motivate the problem and illustrate the various procedures.
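For reference, a hand-rolled Kaplan-Meier estimate is sketched below (illustrative only, with made-up data); the paper's point is that under dependent censoring this curve should be read as an upper or lower bound for the survival function rather than as a consistent estimate.

```python
# Minimal hand-rolled Kaplan-Meier estimate on hypothetical data.
import numpy as np

def kaplan_meier(times, events):
    """Return (event times, survival estimates). events: 1 = failure, 0 = censored."""
    order = np.argsort(times)
    t, d = np.asarray(times)[order], np.asarray(events)[order]
    surv, out_t, out_s = 1.0, [], []
    n_at_risk = len(t)
    for time in np.unique(t):
        at_this = t == time
        deaths = int(d[at_this].sum())
        if deaths > 0:
            surv *= 1.0 - deaths / n_at_risk   # product-limit step
            out_t.append(time)
            out_s.append(surv)
        n_at_risk -= int(at_this.sum())        # remove failures and censored units
    return np.array(out_t), np.array(out_s)

times = [2, 3, 3, 5, 7, 8, 8, 10]
events = [1, 1, 0, 1, 0, 1, 1, 0]
print(kaplan_meier(times, events))
```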

Journal ArticleDOI
TL;DR: The cumulative chi-squared statistic and its maximal component are proposed as nonparametric tests for analyzing clinical trials with special features that require other methods of analysis besides the usual analysis of variance techniques.
Abstract: Clinical trials often have special features that require other methods of analysis besides the usual analysis of variance techniques. For example, data are usually not normally distributed and frequently are in the form of rank data or ordered categorical data. The cumulative chi-squared statistic and its maximal component are proposed as nonparametric tests for analyzing such data. They offer not only robustness of validity but also robustness of efficiency. These two statistics are useful generally for modelling and analyzing data in situations where there is an ordering in the parameters. The cumulative chi-squared statistic is applied to the profile analysis of repeated measures, which requires taking the natural ordering along the time axis into account. The maximal component statistic is applied to a dose-finding experiment where a particular multiple comparison procedure is required for ordered parameters corresponding to dose levels. Other problems addressed in the paper include various kinds of multiplicity problems and proving the equivalence of a test drug to the standard, which requires a quite different approach from the usual significance tests.
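As a hedged illustration of the general construction (details such as scaling and the reference distribution differ in the paper), the sketch below sums the 2 x 2 chi-square statistics over all cut points of a 2 x K table with ordered columns; the counts are hypothetical.

```python
# Illustrative cumulative chi-squared statistic for a 2 x K table with ordered columns:
# sum the 2 x 2 chi-square statistics over the K - 1 cumulative cut points.
import numpy as np
from scipy.stats import chi2_contingency

def cumulative_chi_squared(table):
    table = np.asarray(table, dtype=float)      # shape (2, K), ordered categories
    total = 0.0
    for cut in range(1, table.shape[1]):
        collapsed = np.column_stack([table[:, :cut].sum(axis=1),
                                     table[:, cut:].sum(axis=1)])
        stat, _, _, _ = chi2_contingency(collapsed, correction=False)
        total += stat
    return total

table = [[10, 14, 20, 26],    # hypothetical treatment counts over 4 ordered categories
         [22, 18, 15,  5]]    # hypothetical control counts
print(cumulative_chi_squared(table))
```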

Journal ArticleDOI
TL;DR: In this article, a generalization of the concept of vector correlation proposed by Escoufier (1973) to the context of time series is presented; a sample analogue of the coefficient is introduced and its asymptotic distribution is derived for a wide class of stationary time series.
Abstract: Summary This paper presents a generalization of the concept of vector correlation proposed by Escoufier (1973) to the context of time series. For two jointly stationary multivariate stochastic processes {X_t} and {Y_t}, we define a coefficient of vector cross-correlation at lag k, denoted by A_XY(k), and we describe its main properties. A sample analogue Â_XY(k) is also introduced and its asymptotic distribution is derived for a wide class of stationary time series. For Y_t = X_t, A_XX(k) is a coefficient of vector autocorrelation and the Â_XX(k)'s can be used, in particular, to test the hypothesis of white noise. First, we describe a test for white noise against serial dependence at each lag k and, secondly, we define a global test against serial dependence at several lags (say k = 1, ..., M). A procedure for checking the independence of two jointly stationary multivariate time series is also presented.
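To convey the flavour of a lagged vector-correlation coefficient, the sketch below computes an RV-type (Escoufier-style) quantity from the lag-k cross-covariance matrix; it is an assumption-laden stand-in, not the paper's coefficient or its asymptotic theory.

```python
# Illustrative RV-type coefficient at lag k, built from the lag-k cross-covariance
# matrix of two multivariate series (a stand-in for the paper's coefficient).
import numpy as np

def lagged_vector_correlation(x, y, k):
    """x: (n, p) array, y: (n, q) array, k >= 0; returns a nonnegative scalar."""
    xc = x - x.mean(axis=0)
    yc = y - y.mean(axis=0)
    n = len(xc)
    if k > 0:
        s_xy = xc[: n - k].T @ yc[k:] / n        # lag-k cross-covariance matrix
    else:
        s_xy = xc.T @ yc / n
    s_xx = xc.T @ xc / n
    s_yy = yc.T @ yc / n
    num = np.trace(s_xy @ s_xy.T)
    den = np.sqrt(np.trace(s_xx @ s_xx) * np.trace(s_yy @ s_yy))
    return num / den

rng = np.random.default_rng(5)
x = rng.normal(size=(300, 2))
y = np.roll(x, 3, axis=0) + rng.normal(scale=0.5, size=(300, 2))  # y lags x by 3
print([round(lagged_vector_correlation(x, y, k), 3) for k in range(5)])
```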

Journal ArticleDOI
TL;DR: In this article, a set of activities, known as a measurement assurance program, which cut across three disciplines: quality management, metrology, and statistics, is described and the role of statistics in such a program is discussed.
Abstract: Measurement assurance is a structured process designed to ensure that measurements are adequate for their intended use. The assurance is achieved by implementing a set of activities, known as a measurement assurance program, which cut across three disciplines: quality management, metrology, and statistics. This paper briefly describes these activities and focuses on the role of statistics in such a program. It identifies the various statistical techniques that contribute to measurement assurance and points to international statistical standards, both published and in-progress, that support these techniques. Finally, the paper identifies important areas where statistical contributions are needed given the sophistication of modern measurement technology.

Journal ArticleDOI
TL;DR: In this paper, the authors present universal identities for the probability of occurrence of at least t, or exactly t, out of n arbitrary events, and show how these relationships are consequences of two elementary results: P(AB^c) = P(A) - P(AB), and P(E) ≥ P(F) if E ⊃ F.
Abstract: Summary We present universal identities for the probability of occurrence of at least t, or exactly t, out of n arbitrary events. Well-known, as well as new, bounds can be derived simply from these identities. The approach is probabilistic and provides a natural and unified approach to these inequalities. It also has the advantage of showing how these relationships are consequences of two elementary results: P(AB^c) = P(A) - P(AB); and P(E) ≥ P(F) if E ⊃ F.
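The two elementary facts cited in the abstract are easy to check numerically; the snippet below is only a sanity check on simulated events, not part of the paper's derivations.

```python
# Sanity check of the two elementary facts: P(A and not B) = P(A) - P(AB),
# and P(E) >= P(F) whenever E contains F. Events are simulated indicators.
import numpy as np

rng = np.random.default_rng(6)
u = rng.uniform(size=100_000)
A = u < 0.6                  # event A
B = u < 0.4                  # event B, here a subset of A

lhs = np.mean(A & ~B)
rhs = np.mean(A) - np.mean(A & B)
print(round(lhs, 4), round(rhs, 4))          # the sample analogues agree exactly

E, F = A, B                                  # F implies E, so P(E) >= P(F)
print(np.mean(E) >= np.mean(F))
```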

Journal ArticleDOI
TL;DR: In this article, a way to deal with robustness in hypothesis testing using a tail-ordering on distributions is described: under mild conditions, to test H0: θ ≤ θ0 at level α < 0.5, the uniformly most powerful (UMP) test that accepts H0 when X has distribution function F (X ~ F) would also accept it with X ~ G if F ≤t G.
Abstract: Summary A way to deal with robustness in hypothesis testing using a tail-ordering on distributions is described. We prove, under mild conditions, that to test H0: θ ≤ θ0 at level α < 0.5, the uniformly most powerful (UMP) test that accepts H0 when X has distribution function F (X ~ F) would also accept it with X ~ G if F ≤t G. Likewise, the UMP test that rejects H0 with X ~ F would also reject it with X ~ D if D ≤t F, where ≤t is the tail-ordering defined by Loh (1984), and where F, G and D belong to the class of distributions J* defined below. For distributions of this class we define the r-value as a measure of test robustness against changes in the model distribution. We also make an analysis of test robustness using the asymptotic distribution of the random variable p-value.

Journal ArticleDOI
TL;DR: In this paper, the authors consider different aspects of variation and discuss two strategies for reducing variation caused by an identified source, which involve controlling the source of the variation, or finding a special type of interaction between the source and a process control factor.
Abstract: Summary This paper considers different aspects of variation and discusses two strategies for reducing variation caused by an identified source. The strategies involve controlling the source of the variation, or finding a special type of interaction between the source and a process control factor. The latter approach is further explained by examples using either traditional designed experiments or the product arrays proposed by Taguchi.

Journal ArticleDOI
TL;DR: In this article, the authors discuss general issues pertinent to the accompanying papers: the nature of the study design used; the time dimension of the exposure and the spatial variation in exposure.
Abstract: Summary Interest in industrial epidemiology has grown in recent years with increasing awareness of the need to understand the impact of industrial processes on the health of the workforce or on the population at large. In this short introduction, we discuss general issues pertinent to the accompanying papers: the nature of the study design used, the time dimension of the exposure, and the spatial variation in exposure. We also elaborate on the epidemiological implications of measurement error in the evaluation of exposure levels, a problem which commonly occurs in industrial epidemiology.

Journal ArticleDOI
TL;DR: The authors consider whether an observed image can reasonably be regarded as having been generated by a given stochastic image-generating process, treating the observed image as the realization, or part of the realization, of a random process.
Abstract: It is common for the information collected in an experiment to take the form of an image. For example, in the study of materials or soils one can observe very fine structures, in the medical sciences one has access to images of tissues or organs, and, at another scale, satellites can provide very detailed views of large parts of the ground. Statisticians have recently shown an interest in image analysis, for example Switzer (1983), Baddeley (1984), Besag (1986) and Ripley (1986). It is often appropriate to regard an observed image as the realization, or part of the realization, of a random process, for instance when noise of a random nature has been superimposed on the original (true) image. Likewise, when a crystalline structure is observed over a large region divided into subregions, the same pattern does not necessarily appear in each of them; this is analogous to what happens when the trajectory of a time series is divided into segments. Although the class of image-generating stochastic processes is not yet very rich, a number of such processes are fairly well known. These processes may have a strongly geometric character, for example those based on a partition of the plane (Stoyan, Kendall & Mecke, 1987, chapter 10), or a more probabilistic character, for example those based on Markov random fields (Besag, 1974, 1986). Ahuja & Schachter (1982) give information about several of these processes. It may be important to determine the generating process of an image from a single observation of it. If this ambitious objective cannot be attained with certainty, one would at least like to be able to say whether an observed image can reasonably be considered as having been generated by a given process. This is the problem we address in this article, in the case of images having two states or

Journal ArticleDOI
TL;DR: In this article, the authors describe a graphical display for continuous manufacturing process data and discuss the impact their display has had on the manufacturing process and briefly describe a system they have developed to display such data animations.
Abstract: Summary Dynamic graphical methods can be usefully applied to the analysis of continuous manufacturing process data. This paper describes some data collected during an ongoing project which addresses a particular manufacturing problem and describes a method developed to display the data. The nature of the manufacturing problem and the computational requirements of a dynamic graphical display prevented the more common interactive approach. We describe a graphical display we developed for these data and discuss the impact our display has had on the manufacturing process. In an appendix we briefly describe a system we have developed to display such data animations.

Journal ArticleDOI
TL;DR: The finding of a confirmed childhood leukaemia excess in the village is statistically related to father's external radiation exposure during work at Sellafield before his child's conception and clearly needs more attention and detail to understand whether it is truly radiation related and, if so, quite how it might operate.
Abstract: Summary At this stage the perspective has changed from that of a suggested childhood leukaemia excess in Seascale which was possibly related to environmental radiation contamination (as originally reported by the Yorkshire television programme) to a confirmed childhood leukaemia excess in the village which is statistically related to father's external radiation exposure during work at Sellafield before his child's conception. The evidence has been developed from anecdote by the sequential use of standard statistical and epidemiological methods-geographical, cohort and case-control studies. This finding clearly needs and is receiving more attention and detail to try to understand whether it is truly radiation related and, if so, quite how it might operate.

Journal ArticleDOI
TL;DR: In this paper, the authors describe the design, analysis and interpretation of the long term mortality studies of coke oven workers conducted by the Department of Biostatistics in the Graduate School of Public Health, University of Pittsburgh.
Abstract: Summary This paper describes the design, analysis and interpretation of the long term mortality studies of coke oven workers conducted by the Department of Biostatistics in the Graduate School of Public Health, University of Pittsburgh. Problems related to the uncertainties associated with occupational exposure histories, industrial exposure measurements and their impact on risk estimates are discussed. Methodologies to combat these difficulties using ancillary risk factor information are suggested. The coke oven workers major cohorts consist of 1060 men employed at the coke ovens in 1953 in two major coke plants in Allegheny County, Pennsylvania and 1860 men who worked between 1951 and 1955 at the coke ovens in ten coke plants from other areas of the United States and Canada. The major comparison groups consist of 2065 men for the Allegheny County cohort and 3579 men for the Non-Allegheny County cohort, who worked in the corresponding steel plants during the same periods but never in coke oven jobs. The cohorts have been followed for mortality from 1953 through 1982. Cause-specific mortality rates within different subgroups based primarily on length of employment and type of jobs are evaluated for various phases of follow-up using summary statistics such as standardized mortality ratios and relative odds ratios. Major findings indicate a strong relationship for increased respiratory cancer risks associated with longer duration and proximity to the coke ovens and an increased risk of prostate cancer among coke oven workers. Dose-response relationships between lung cancer risks and a measure of coke oven emissions, the benzene soluble fraction of the total particulate matter, commonly known as coal tar pitch volatile (CTPV), are investigated using both regression and biologically motivated models. Results from the regression models indicate that measures of exposure rates, duration of exposure and time since exposure terminated are important predictors of cancer risks. Results from the fitting of multistage and two-mutation models indicate both initiation and promotional effects of coke oven effluent on lung cancer.

Journal ArticleDOI
TL;DR: In this paper, a general quality selection model is derived from a study of a Swedish pulp mill, where the customer's quality requirements are given by a capability index, which includes tolerance limits and means.
Abstract: A quality selection model consists of three parts: economy (prices and costs), production (variability, dependency and distribution of the quality characteristic) and the customer's quality requirements. The main use of quality selection models has earlier been to determine the process level(s) of the quality characteristic(s) which maximize(s) the expected profit. Recently, interest has focused more on sensitivity analysis, mainly of the economic impact of changes in variability, and such a study could be an early part of a quality improvement program. In this paper, a general quality selection model is derived from a study of a Swedish pulp mill. The customer's quality requirements are assumed to be given by a capability index, which includes tolerance limits and means. The number of quality classes is allowed to be more than two and the production cost function is assumed to be exponential (including a linear cost function). When working with a non-linear cost function, two different cases have to be considered: the output case and the input case. The first (second) case covers production processes where the production cost depends on the real (intended) outcome of the production process. The equation for the optimal process level is derived and an explicit approximation of the optimal process level is given. The economic impact of changes in process variability is demonstrated in the case study and emphasized throughout the discussions. It is also shown by an example that a classification into more than two classes has a negligible effect on the optimal process level and on the optimal expected profit. Finally, it is shown that an exponential production cost function can be approximated with a properly chosen linear production cost function.

Journal ArticleDOI
TL;DR: In this paper, the maximum likelihood estimators for the two marginal parameters and the covariance parameter were derived, and the asymptotic variances of the estimators were also derived.
Abstract: Summary When the observations on a bivariate Poisson population are classified as either no occurrence or at least one occurrence, the resulting sample is in the form of a 2 X 2 table. In this case, the present paper gives maximum likelihood estimators for the two marginal parameters and the covariance parameter. The asymptotic variances of the estimators are also derived. For comparison with other estimators, two illustrative examples are given. It is concluded that the estimators of the present paper, which are obtained by simple direct calculations, produce fairly accurate results, especially for small values of the marginal parameters.
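Under the standard trivariate-reduction bivariate Poisson model (X = U1 + U12, Y = U2 + U12 with independent Poisson components), dichotomizing each margin as zero versus at least one occurrence leads to the simple estimators sketched below; this is our reading of the setup, and the paper's exact formulas and variance expressions are not reproduced.

```python
# Illustrative estimators from a dichotomized bivariate Poisson sample, assuming
#   P(X = 0)        = exp(-(l1 + l12))
#   P(Y = 0)        = exp(-(l2 + l12))
#   P(X = 0, Y = 0) = exp(-(l1 + l2 + l12))
import numpy as np

def dichotomized_bivariate_poisson_estimates(n00, n01, n10, n11):
    """Rows: X zero / nonzero; columns: Y zero / nonzero."""
    n = n00 + n01 + n10 + n11
    p_x0 = (n00 + n01) / n          # proportion with X = 0
    p_y0 = (n00 + n10) / n          # proportion with Y = 0
    p_00 = n00 / n                  # proportion with X = 0 and Y = 0
    l12 = np.log(p_00) - np.log(p_x0) - np.log(p_y0)   # covariance parameter
    l1 = -np.log(p_x0) - l12
    l2 = -np.log(p_y0) - l12
    return l1, l2, l12

print(dichotomized_bivariate_poisson_estimates(40, 25, 20, 15))
```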

Journal ArticleDOI
TL;DR: In this article, the authors discuss statistical inference for response models in which the dependent variable is subjected to a nonlinear parametric transformation, and discuss methods of achieving the efficiency bound of the instrumental variables estimates.
Abstract: Summary The paper discusses statistical inference for response models in which the dependent variable is subjected to a nonlinear parametric transformation. Maximum likelihood estimation based on normally distributed errors may be logically flawed, and in any case maximum likelihood estimates are generally inconsistent when the error distribution, be it normal or non-normal, is misspecified. We also discuss a semiparametric approach, based on a class of instrumental variables estimates which always lose efficiency relative to correctly specified maximum likelihood but are consistent in the presence of a wide class of error distributions. We discuss methods of achieving the efficiency bound of the instrumental variables estimates.

Journal ArticleDOI
TL;DR: In this article, the authors describe the course of the disaster caused by a major leakage of toxic gas at Bhopal City, Madhya Pradesh, India, and consider the statistical problems associated with the subsequent investigation of this disaster, from the point of view of both acute and chronic morbidity and mortality.
Abstract: Summary The paper describes the course of the disaster caused by a major leakage of toxic gas at Bhopal City, Madhya Pradesh, India. It then considers the statistical problems associated with the subsequent investigation of the disaster, from the point of view of both acute and chronic morbidity and mortality.

Journal ArticleDOI
TL;DR: In this article, weighted least squares estimators of the parameters of the model are obtained with the help of a condition that the cell frequencies are proportional to the corresponding cell variances.
Abstract: Summary For factorial designs with unequal cell variances, weighted least squares estimators of the parameters of the model are obtained with the help of a condition that the cell frequencies are proportional to the corresponding cell variances. This proportionality condition also produces balance of the design, so that the expectations of mean squares can be obtained and tests of significance performed easily. When cell variances are not known, the sample for each cell is drawn in two stages. The cell samples at the first stage provide independent estimators of cell variances. The size of the final sample in each cell is then determined by the proportionality condition using estimated variances. The same analysis is then carried out for the final sample, along with some adjustment for bias wherever necessary. The method is applicable to nested and other designs.
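A minimal weighted least squares sketch (illustrative, not the paper's two-stage design): weighting each observation by the reciprocal of its cell variance is equivalent to ordinary least squares on rescaled data. The layout and numbers below are hypothetical.

```python
# Minimal weighted-least-squares sketch: weights proportional to 1 / cell variance,
# implemented as ordinary least squares on rescaled rows.
import numpy as np

def weighted_least_squares(X, y, cell_variances):
    w = 1.0 / np.asarray(cell_variances)         # weights proportional to 1 / variance
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

# Hypothetical one-way layout with two cells of unequal variance
X = np.array([[1.0, 0.0]] * 5 + [[0.0, 1.0]] * 5)
rng = np.random.default_rng(7)
y = np.concatenate([rng.normal(10, 1, 5), rng.normal(20, 4, 5)])
variances = np.array([1.0] * 5 + [16.0] * 5)
print(weighted_least_squares(X, y, variances))
```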