
Showing papers in "The Statistician in 1988"


Journal ArticleDOI
TL;DR: This work discusses the use of graduating functions; design aspects of variance, bias and lack of fit; and the practical choice of a response surface design in relation to second-order response surfaces.
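For context, a second-order response surface approximates the response over k coded factors by the full quadratic graduating function; this is the standard form, not a formula quoted from the work itself:

\[
y = \beta_0 + \sum_{i=1}^{k}\beta_i x_i + \sum_{i=1}^{k}\beta_{ii} x_i^2 + \sum_{i<j}\beta_{ij} x_i x_j + \varepsilon .
\]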

4,363 citations



Journal ArticleDOI

838 citations


Journal ArticleDOI

793 citations


Journal ArticleDOI
TL;DR: In this article, an alternative measure, based on quantiles, is proposed, which is shown to have desirable properties: (i) the measure exists even for distributions for which no moments exist, (ii) it is not influenced by the (extreme) tails of the distribution, and (iii) the calculation is simple (and is even possible by graphical means).
Abstract: Recently, Moors (1986) showed that kurtosis is easily interpreted as a measure of dispersion around the two values μ ± σ. For this dispersion an alternative measure, based on quantiles, is proposed here. It is shown to have several desirable properties: (i) the measure exists even for distributions for which no moments exist, (ii) it is not influenced by the (extreme) tails of the distribution, and (iii) the calculation is simple (and is even possible by graphical means).
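The quantile-based measure is usually cited in terms of the octiles E1, ..., E7; a minimal sketch, assuming that octile form (the paper's exact definition may differ in detail):

```python
import numpy as np

def moors_kurtosis(x):
    """Quantile-based kurtosis as usually cited (octiles E1..E7):
        T = ((E7 - E5) + (E3 - E1)) / (E6 - E2)
    Assumed form for illustration; exists whenever the octiles do.
    """
    e1, e2, e3, e5, e6, e7 = np.quantile(x, [1/8, 2/8, 3/8, 5/8, 6/8, 7/8])
    return ((e7 - e5) + (e3 - e1)) / (e6 - e2)

# For a large standard normal sample the value is roughly 1.23.
print(moors_kurtosis(np.random.default_rng(0).standard_normal(100_000)))
```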

487 citations


Journal ArticleDOI
TL;DR: This chapter discusses the evolution of Computer-Aided Educational Delivery Systems and the role of Instructional Systems in this evolution.
Abstract: Contents: R.M. Gagné, Introduction. R.A. Reiser, Instructional Technology: A History. R.M. Gagné, R. Glaser, Foundations in Learning Research. B.H. Banathy, Instructional Systems Design. R. Kaufman, S. Thiagarajan, Identifying and Specifying Requirements for Instruction. P.F. Merrill, Job and Task Analysis. C.M. Reigeluth, R.V. Curtis, Learning Situations and Instructional Models. S. Tobias, Learner Characteristics. M.L. Fleming, Displays and Communication. G.C. Nugent, Innovations in Telecommunications. C.V. Bunderson, D.K. Inouye, The Evolution of Computer-Aided Educational Delivery Systems. R.D. Tennyson, O.C. Park, Artificial Intelligence and Computer-Based Learning. E.L. Baker, H.F. O'Neil, Jr., Assessing Instructional Outcomes. R.M. Morgan, Planning for Instructional Systems. R.K. Branson, G. Grow, Instructional Systems Development. E. Burkman, Factors Affecting Utilization.

304 citations


BookDOI
TL;DR: In this article, a game-theoretic approach to the cost allocation problem by means of the τ-value, the nucleolus and the Shapley value is presented, with attention restricted to cooperative games.
Abstract: I Cooperative Games and Examples.- II Solution Concepts for Cooperative Games and Related Subjects.- III The τ-Value.- IV A Game Theoretic Approach to the Cost Allocation Problem by Means of the τ-Value, the Nucleolus and the Shapley Value.- V Convex Games and Solution Concepts.- VI Division Rules and Associated Game Theoretic Solutions for Bankruptcy Problems.- VII k-Convex Games and Solution Concepts.- References.- Author Index.
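As an illustration of one of the solution concepts mentioned, here is a minimal Python sketch computing the Shapley value of a small, hypothetical cooperative cost-savings game from its characteristic function (the τ-value and nucleolus need more machinery and are not shown):

```python
from itertools import combinations
from math import factorial

def shapley_value(players, v):
    """Shapley value of a cooperative game.
    players: list of player labels
    v: dict mapping frozenset coalitions to their worth, with v[frozenset()] = 0
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                # weight |S|! (n - |S| - 1)! / n! times the marginal contribution of i
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v[S | {i}] - v[S])
        phi[i] = total
    return phi

# Hypothetical three-player savings game (illustrative numbers only)
v = {frozenset(): 0, frozenset({'A'}): 0, frozenset({'B'}): 0, frozenset({'C'}): 0,
     frozenset({'A', 'B'}): 90, frozenset({'A', 'C'}): 80, frozenset({'B', 'C'}): 70,
     frozenset({'A', 'B', 'C'}): 120}
print(shapley_value(['A', 'B', 'C'], v))   # allocations sum to v(N) = 120
```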

297 citations


Journal ArticleDOI
TL;DR: The Holt-Winters forecasting procedure is a variant of exponential smoothing which is simple, yet generally works well in practice, and is particularly suitable for producing short-term forecasts for sales or demand time-series data.
Abstract: The Holt-Winters forecasting procedure is a variant of exponential smoothing which is simple, yet generally works well in practice, and is particularly suitable for producing short-term forecasts for sales or demand time-series data. Some practical problems in implementing the method are discussed, including the normalisation of seasonal indices, the choice of starting values and the choice of smoothing parameters. There is an important distinction between an automatic and a nonautomatic approach to forecasting and detailed suggestions are made for implementing Holt-Winters in both ways. The question as to what underlying model, if any, is assumed by the method is also addressed. Some possible areas for future research are then outlined.
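For reference, the additive version of the method updates a level, a trend and a set of seasonal indices with three smoothing parameters; a standard statement of the recursions (notation assumed here, not taken from the paper) is

\[
\begin{aligned}
L_t &= \alpha\,(Y_t - I_{t-s}) + (1-\alpha)\,(L_{t-1} + b_{t-1}),\\
b_t &= \beta\,(L_t - L_{t-1}) + (1-\beta)\,b_{t-1},\\
I_t &= \gamma\,(Y_t - L_t) + (1-\gamma)\,I_{t-s},\\
\hat{Y}_t(h) &= L_t + h\,b_t + I_{t-s+h}, \qquad 1 \le h \le s,
\end{aligned}
\]

where s is the length of the seasonal cycle and α, β, γ are the smoothing parameters whose choice, together with the starting values and the normalisation of the seasonal indices, is discussed in the paper.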

297 citations




BookDOI
TL;DR: This volume reports the NATO Workshop "Surface Organometallic Chemistry: Molecular Approaches to Surface Catalysis", covering topics from the surfaces of oxides at a molecular level to zeolite synthesis.
Abstract: Report Of The Nato Workshop: "Surface Organometallic Chemistry: Molecular Approaches To Surface Catalysis".- The Surfaces Of Oxides At A Molecular Level.- Reaction Of Organometallics With Surfaces Of Metal Oxides.- Catalytic Reactions Carried Out With Metals Derived From Clusters.- Soluble And Supported Metal Catalysts For Hydrocarbon Oxidation In Liquid And Vapor Phase.- Reactions Of Organometallic Compounds With Surfaces Of Supported And Unsupported Metals.- Low-Nuclearity Metal Clusters: Structure And Reactivity.- Large Molecular Metal Carbonyl Clusters: Models Of Metal Particles.- Molecular Models Of Early Transition Metal Oxides: Polyoxoanions As Organic Functional Groups.- Homogeneous Models For Mechanisms Of Surface Reactions: Propene Ammoxidation.- Organometallic Oxides: Future Models In Catalysis? The Example Of Trioxo(η5-Pentamethylcyclopentadienyl)Rhenium(VII).- Zeolite Synthesis: An Overview.- New Directions In Molecular Sieve Science And Technology.- Recent Advances In Pillared Clays And Group IV Metal Phosphates.- Reaction Of Organometallics With The Surfaces Of Zeolites.




Journal ArticleDOI
TL;DR: This work covers arranging data to convey meaning, tables and graphs, measures of central tendency and dispersion in frequency distributions, probability, sampling and sampling distributions, estimation, hypothesis testing, and chi-square and analysis of variance.
Abstract: Introduction; arranging data to convey meaning; tables and graphs; measures of central tendency and dispersion in frequency distributions; probability I - introductory ideas; probability II - distributions; sampling and sampling distributions; estimation; testing hypotheses; chi-square and analysis of variance; simple regression and correlation; multiple regression and modelling techniques; nonparametric methods; time series; index numbers; decision theory.

Journal ArticleDOI
Robin Darton






Journal ArticleDOI
TL;DR: In this article, it is observed that the conditional variance of the error of an m-step non-linear least-squares predictor is not necessarily a monotonic non-decreasing function of m, a fact which does not appear to have been documented before.
Abstract: We first observe that the conditional variance of the error of an m-step non-linear least-squares predictor is not necessarily a monotonic non-decreasing function of m. This fact has not been documented, to the best of our knowledge. We have also studied methods of evaluating the conditional variance for non-linear autoregressive models and illustrated these with both real and simulated data. Bias correction is included. The facility afforded by the Chapman-Kolmogorov equation is highlighted. The possible role played by the skeleton is mentioned briefly. Moreover, the possibility of combining forecasts is explored, partly with a view to obtaining forecasts that are robust against prospective influential data.
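The non-monotonicity of the m-step conditional variance is easy to explore numerically. The following sketch uses a hypothetical first-order threshold model (not the authors' data or procedure) and approximates the conditional mean and variance of the m-step predictor by simulating sample paths forward from a fixed current value, in the spirit of iterating the Chapman-Kolmogorov relation:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(x, e):
    # Hypothetical first-order threshold (SETAR-type) model, chosen only for illustration
    return np.where(x <= 0, 0.8 * x, -0.4 * x) + e

def mstep_conditional(x0, m, sigma=1.0, n_sims=100_000):
    """Monte Carlo approximation of E[X_{t+m} | X_t = x0] and Var[X_{t+m} | X_t = x0]."""
    x = np.full(n_sims, x0, dtype=float)
    for _ in range(m):
        x = step(x, rng.normal(0.0, sigma, n_sims))
    return x.mean(), x.var()

# For non-linear models the conditional variance need not increase monotonically with m.
for m in range(1, 6):
    mean_m, var_m = mstep_conditional(x0=2.0, m=m)
    print(m, round(mean_m, 3), round(var_m, 3))
```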

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of finding suitable starting values for the EM algorithm in the fitting of finite mixture models to multivariate data; given that the likelihood equation often has multiple roots for mixture models, the choice of starting values requires careful consideration.
Abstract: We consider the problem of finding suitable starting values for the EM algorithm in the fitting of finite mixture models to multivariate data. Given that the likelihood equation often has multiple roots for mixture models, the choice of starting values requires careful consideration. Attention is focussed here on the use of principal components to provide suitable starting values in this context. Examples are presented which involve the clustering of two real data sets.
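A minimal sketch of the idea, under an assumed (and probably simplified) construction rather than the authors' exact procedure: project the observations onto the first principal component, partition them along it, and use the group means as starting means for EM-based fitting of a normal mixture.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def pca_start_means(X, g):
    """Starting means for EM: split the observations into g contiguous groups
    along the first principal component and return the group means."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ vt[0]                      # first principal component scores
    groups = np.array_split(np.argsort(scores), g)
    return np.vstack([X[idx].mean(axis=0) for idx in groups])

# Hypothetical two-cluster data; fit a 2-component mixture from PCA-based starting values
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
gm = GaussianMixture(n_components=2, means_init=pca_start_means(X, 2)).fit(X)
print(gm.means_)
```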




Journal ArticleDOI
TL;DR: In this article, an autoregressive-moving average error process is used in fitting a regression equation to the energy demands of a mechanical model of a suckler cow, and drug-induced currents in ion-channels are represented by a realisation of a stochastic compartment system.
Abstract: Three data sets are analysed to illustrate methods of modelling regression errors which are serially correlated. An autoregressive-moving average error process is used in fitting a regression equation to the energy demands of a mechanical model of a suckler cow. Drug-induced currents in ion-channels are represented by a realisation of a stochastic compartment system. First-order linear stochastic difference equations are used to model the milk yield of cows. It is concluded that error models should be used with caution. In most situations where a fitted regression function describes the data poorly, it is assumed that the function is deficient, and it is changed. But there are cases where the assumption of independent errors is not wholly plausible. For example, some sources of error will persist over several observations when repeated measurements are made on a single experimental unit. Systematic departures may then be modelled either by another regression function, or by correlated errors. The modelling objective determines the choice: for a simple summary it may be preferable for the regression function to explain all systematic variability, whereas a correlated stochastic component may be of more assistance in understanding the data-generating mechanism. A succinct summary of data is often achieved by using the regression function to describe the long-term trends and the correlations the short-term fluctuations. In the presence of correlated errors, ordinary least squares regression parameter estimators may be inefficient and the conventional estimators of the variances of these estimators are usually biased. The simplest way round these problems is to discard the biased standard errors, the argument being that least-squares estimation is often not very inefficient, and is intuitively appealing because of its simplicity. This approach is most useful when no estimate of precision is required, for example when data are available from independent units and within-unit variability is of little importance (see, for example, Rowell & Walters, 1976). Alternatively, if it can be assumed that the errors arose from a particular stochastic model, any error parameters can be estimated jointly with the regression ones by maximising the likelihood. Empirical and mechanistic approaches to modelling errors are considered in the following two sections. In essence, the mechanistic approach requires knowledge of the processes by which the data were generated, whereas the empirical method is purely data-based (Thornley, 1976, pp. 4-6).
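As a hedged illustration of the joint maximum-likelihood approach mentioned above (not the authors' code or data), a regression with ARMA(1, 1) errors can be fitted in statsmodels by passing the regressors as exogenous variables:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical example: linear trend regression with ARMA(1, 1) errors,
# with regression and error parameters estimated jointly by maximum likelihood.
rng = np.random.default_rng(2)
n = 200
t = np.arange(n, dtype=float)
eps = rng.normal(0, 1, n)
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.7 * e[i - 1] + eps[i] + 0.3 * eps[i - 1]   # simulated ARMA(1, 1) error process
y = 2.0 + 0.05 * t + e

res = SARIMAX(y, exog=t, order=(1, 0, 1), trend='c').fit(disp=False)
print(res.summary())
```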

Journal ArticleDOI
TL;DR: In this article, a natural conjugate reference informative prior for the parameters of a normal regression model is proposed, which takes into account different degrees of certainty about the different independent variables.
Abstract: We consider a normal regression model and propose a natural conjugate reference informative prior for the parameters, which takes into account different degrees of certainty about the different independent variables. In many applied problems, it is very important to assess a prior density function for the parameters of the model which is as informative as possible of the real opinion of the experts. A correct assessment of the prior density can in fact sometimes compensate for insufficiency of data. In the case of a normal Multiple Regression Model (MRM) y = Xβ + u, u ~ N(0, σ²I), many different procedures for assessing the prior density for β and σ² in the natural conjugate form have been developed, but for many of those models it is difficult to evaluate the prior covariances for the elements of β (Zellner, 1983). For such situations and others, Zellner (1983) proposed a procedure for assessing a g-Reference Informative Prior (g-RIP) essentially based on the following: (a) before observing y, a conceptual sample y₀ is assumed to be generated by the model y₀ = Xβ + u₀, u₀ ~ N(0, (σ²/g) Iₙ), with g > 0 given; (b) values βₐ and σₐ² for β and σ², anticipated by the experts, are assessed; (c) Muth's rational expectation hypothesis is invoked. Zellner's g-RIP is effective in the analysis of engineering systems (Calvi et al., 1986) or in modelling biological phenomena. In these cases, subjective knowledge can often be formalised in the assignment of the anticipated parameter values βₐ and σₐ² and in the indication of a degree of precision g for the conceptual sample y₀. However, further information is sometimes available about the roles of the individual independent variables in the model. In order to take such information into account, we propose an extension of Zellner's approach.
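For orientation, under the construction in (a)-(c) the posterior mean of β is a weighted compromise between the least-squares estimate and the anticipated value. A minimal sketch, assuming the prior β | σ² ~ N(βₐ, (σ²/g)(X'X)⁻¹) implied by the conceptual sample (the paper's variable-specific extension is not reproduced here):

```python
import numpy as np

def g_rip_posterior_mean(X, y, beta_a, g):
    """Posterior mean of the regression coefficients under an assumed g-type
    conjugate prior beta | sigma^2 ~ N(beta_a, (sigma^2 / g) * inv(X'X)),
    which combines with the data to give (beta_ols + g * beta_a) / (1 + g).
    A sketch under stated assumptions, not the paper's extended prior."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (beta_ols + g * beta_a) / (1.0 + g)

# As g -> 0 the result approaches least squares; large g pulls it toward beta_a.
```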

Journal ArticleDOI
TL;DR: In the light of empirical evidence, this article revises Table 2 of Moeanaddin & Tong (1988, p. 218) by adopting 12.6 as the 5% point for the case p = 1 = d, and 15.6 as the 5% point for the case p = 2, d = 1 or 2.
Abstract: In the light of empirical evidence, we have revised Table 2 of Moeanaddin & Tong (1988, p. 218) by adopting 12.6 as the 5% point for the case p = 1 = d, and 15.6 as the 5% point for the case p = 2, d = 1 or 2. In each case, the threshold is again searched over the interquartile range of the data set. Other conditions of the simulation remain unchanged, except that the likelihood ratio test statistics should read