
Showing papers in "The American Statistician in 1980"


Journal ArticleDOI
TL;DR: The parameter concept underlying the term least squares mean is defined and given the more meaningful name population marginal mean, and its estimation is discussed.
Abstract: The parameter concept in the term least squares mean is defined and given the more meaningful name population marginal mean; and its estimation is discussed.

1,143 citations
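
An illustrative sketch of the distinction involved (not the paper's notation; the data below are hypothetical): in an unbalanced two-way layout, the raw marginal mean weights cell means by cell sizes, whereas the population marginal mean, i.e. the least squares mean, averages the cell means with equal weight over the levels of the other factor.

```python
# Hypothetical unbalanced 2 x 3 layout: factor A (2 levels) x factor B (3 levels).
import numpy as np

cells = {
    (0, 0): [10, 12],         (0, 1): [11],        (0, 2): [14, 15, 16],
    (1, 0): [20, 21, 22, 23], (1, 1): [19, 20],    (1, 2): [25],
}

def raw_marginal_mean(level_a):
    """Mean of all observations at this level of A, implicitly weighted by cell sizes."""
    obs = [y for (a, b), ys in cells.items() if a == level_a for y in ys]
    return np.mean(obs)

def population_marginal_mean(level_a):
    """Unweighted average of the cell means at this level of A over the 3 levels of B."""
    cell_means = [np.mean(cells[(level_a, b)]) for b in range(3)]
    return np.mean(cell_means)

for a in (0, 1):
    print(f"A level {a}: raw marginal = {raw_marginal_mean(a):.3f}, "
          f"population marginal (LS mean) = {population_marginal_mean(a):.3f}")
```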


Journal ArticleDOI
TL;DR: This work states that exploratory data analysis is an attitude, a flexibility, and a reliance on display, NOT a bundle of techniques, and should be so taught, and confirmatory analysis is easier to teach and easier to computerize.
Abstract: We often forget how science and engineering function. Ideas come from previous exploration more often than from lightning strokes. Important questions can demand the most careful planning for confirmatory analysis. Broad general inquiries are also important. Finding the question is often more important than finding the answer. Exploratory data analysis is an attitude, a flexibility, and a reliance on display, NOT a bundle of techniques, and should be so taught. Confirmatory data analysis, by contrast, is easier to teach and easier to computerize. We need to teach both; to think about science and engineering more broadly; to be prepared to randomize and avoid multiplicity.

462 citations




Journal ArticleDOI
TL;DR: In this paper, a graphical procedure for the display of treatment means that enables one to determine the statistical significance of the observed differences is presented, based on the widely used least significant difference and honestly significant difference statistics.
Abstract: A graphical procedure for the display of treatment means that enables one to determine the statistical significance of the observed differences is presented. It is shown that the widely used least significant difference and honestly significant difference statistics can be used to construct plots in which any two means whose uncertainty intervals do not overlap are significantly different at the assigned probability level. It is argued that these plots, because of their straightforward decision rules, are more effective than those that show the observed means with standard errors or confidence limits. Several examples of the proposed displays are included to illustrate the procedure.

113 citations
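
A minimal sketch of the kind of display described (not the authors' exact construction; the means, MSE, group size, and error degrees of freedom below are made up): with equal group sizes, plotting each mean with an interval of half-width LSD/2 makes non-overlap of two intervals equivalent to the pair of means differing by more than the LSD.

```python
# Plot treatment means with intervals of half-width LSD/2, where
# LSD = t(alpha/2, df_error) * sqrt(2 * MSE / n).  Two means whose intervals
# do not overlap differ by more than the LSD, hence significantly at level alpha.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

means = np.array([50.2, 53.1, 57.8, 54.0])    # hypothetical treatment means
mse, n, df_error, alpha = 6.4, 8, 28, 0.05     # hypothetical ANOVA quantities

lsd = stats.t.ppf(1 - alpha / 2, df_error) * np.sqrt(2 * mse / n)
half = lsd / 2                                 # half-width of each plotted interval

plt.errorbar(np.arange(len(means)), means, yerr=half, fmt="o", capsize=4)
plt.xticks(np.arange(len(means)), [f"T{i + 1}" for i in range(len(means))])
plt.ylabel("Treatment mean ± LSD/2")
plt.title("Non-overlapping intervals imply a significant difference at alpha = 0.05")
plt.show()
```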


Journal ArticleDOI
TL;DR: The article indicates the radically new form and efficiency of factorial block designs, and shows the further advantages accruing to factorial arrangements through confounding, and suggests how Fisher's close collaboration with experimenters stimulated these developments.
Abstract: This article traces the development of the design of experiments from origins in the mind and professional experience of R.A. Fisher between 1922 and 1926. The article indicates how the analysis of variance procedure stimulated design, being justified by the principle of randomization that Fisher introduced with the analysis, and exploited by his use of blocking and replication. The article indicates the radically new form and efficiency of factorial block designs, shows the further advantages accruing to factorial arrangements through confounding, and suggests how Fisher's close collaboration with experimenters stimulated these developments.

103 citations


Journal ArticleDOI
TL;DR: The search for rules for effective graphical display, whether for the purpose of communication, exploration, or reconstitution, has been hampered by the lack of a cohesive body of experimental evidence regarding the parameters of graphical display.
Abstract: The search for rules for effective graphical display, whether for the purpose of communication, exploration, or reconstitution, has been hampered by the lack of a cohesive body of experimental evidence regarding the parameters of efficacious graphical display. To some extent existing evidence is diverse because of the lack of a coordinating theoretical structure and an allied unified graphical vocabulary. In this article we use the rudiments of Bertin's theory of graphics and his graphical semiology. We also present two experimental studies of two-variable color maps, which we offer as examples of how questions about the efficacy of a graphical form can be addressed.

92 citations


Journal ArticleDOI
TL;DR: Two common methods of analyzing data from a two-group pretest-posttest research design are (a) two-sample t test on the difference score between pretest and posttest and (b) repeated-measures/split-plot analysis of variance as mentioned in this paper.
Abstract: Two common methods of analyzing data from a two-group pretest-posttest research design are (a) two-sample t test on the difference score between pretest and posttest and (b) repeated-measures/split-plot analysis of variance. The repeated-measures/split-plot analysis subsumes the t test analysis, although the former requires more assumptions to be satisfied. A numerical example is given to illustrate some of the equivalences of the two methods of analysis. The investigator should choose the method of analysis based on the research objective(s).

81 citations
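
A small simulation illustrating the equivalence noted above (assuming two groups and two time points, in which case the group-by-time interaction test of the split-plot ANOVA reduces to a one-way ANOVA on the gain scores): the interaction F equals the square of the two-sample t on the gains.

```python
# Simulated two-group pretest-posttest data (all numbers hypothetical).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 15
pre1, pre2 = rng.normal(50, 10, n), rng.normal(50, 10, n)
post1 = pre1 + rng.normal(5, 4, n)      # treatment group: average gain of 5
post2 = pre2 + rng.normal(1, 4, n)      # control group: average gain of 1

gain1, gain2 = post1 - pre1, post2 - pre2

t, p_t = stats.ttest_ind(gain1, gain2)  # two-sample t test on the difference scores
f, p_f = stats.f_oneway(gain1, gain2)   # interaction test, expressed as ANOVA on gains

print(f"t = {t:.4f},  t^2 = {t**2:.4f}")
print(f"F = {f:.4f}   (p-values agree: {p_t:.4f} vs {p_f:.4f})")
```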


Journal ArticleDOI
TL;DR: In this article, a graphical technique, similar in spirit to probability plotting, is used to judge whether a Poisson model is appropriate for an observed frequency distribution, which can be applied to truncated Poisson situations.
Abstract: A graphical technique, similar in spirit to probability plotting, can be used to judge whether a Poisson model is appropriate for an observed frequency distribution. This “Poissonness plot” can equally be applied to truncated Poisson situations. It provides a type of robustness for detecting isolated discrepancies in otherwise well-behaved frequency distributions.

71 citations
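
A rough sketch in the spirit of the plot described (the details of the published construction may differ): for observed frequencies n_k of the count k, plot ln(k! n_k / N) against k; under a Poisson(λ) model the points fall near a straight line with slope ln λ and intercept −λ.

```python
# "Poissonness plot" sketch on simulated Poisson data.
import numpy as np
import matplotlib.pyplot as plt
from math import lgamma

rng = np.random.default_rng(1)
sample = rng.poisson(3.0, size=500)            # simulated Poisson counts
ks, n_k = np.unique(sample, return_counts=True)
N = n_k.sum()

# phi_k = ln(k! * n_k / N); lgamma(k + 1) = ln(k!)
phi = np.log(n_k) + np.array([lgamma(k + 1) for k in ks]) - np.log(N)

plt.plot(ks, phi, "o")
plt.xlabel("k")
plt.ylabel("ln(k! n_k / N)")
plt.title("Poissonness plot: approximate linearity supports a Poisson model")
plt.show()
```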



Journal ArticleDOI
TL;DR: The problem of getting people to use statistics properly is discussed and the importance of using good logic in drawing inferences from statistics is emphasized.
Abstract: This article is a write-up of an invited talk given at a national meeting of the American Statistical Association. It discusses the problem of getting people to use statistics properly and emphasizes the importance of using good logic in drawing inferences from statistics. The job of a good statistical consultant is to sponsor the use of correct logic in his or her client's activities. Statisticians in general should be interested in finding more and better ways of introducing good logic at early levels of the education process.

Journal ArticleDOI
TL;DR: The question of why a geometric or coordinate-free approach to linear models has been subordinated to an algebraic approach is considered by reviewing selected papers having a geometric slant.
Abstract: The question of why a geometric or coordinate-free approach to linear models has been subordinated to an algebraic approach is considered by reviewing selected papers having a geometric slant. These begin with R.A. Fisher's 1915 paper on the distribution of the correlation coefficient and continue through William Kruskal's elegant 1975 paper on the geometry of generalized inverses. The thesis is put forward that the relative unpopularity of the geometric approach is not due to an inherent inferiority but rather to a combination of inertia, poor exposition, and a resistance to abstraction.



Journal ArticleDOI
TL;DR: Signal-extraction methods based on autoregressive integrated moving average models, improvements in X–11, revisions in preliminary seasonal factors, regression and other model-based methods, robust methods, seasonal model identification, aggregation, interrelating seasonally adjusted series, and causal approaches to seasonal adjustment are summarized.
Abstract: In recent years there have been notable advances in the methodology for analyzing seasonal time series. This paper summarizes some recent research on seasonal adjustment problems and procedures. Included are signal-extraction methods based on autoregressive integrated moving average (ARIMA) models, improvements in X–11, revisions in preliminary seasonal factors, regression and other model-based methods, robust methods, seasonal model identification, aggregation, interrelating seasonally adjusted series, and causal approaches to seasonal adjustment.
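
For orientation only, a toy decomposition of a simulated monthly series using a simple moving-average routine; this is not X-11 or the ARIMA-model-based signal extraction surveyed in the paper, and it assumes the statsmodels package is available.

```python
# Simple additive trend/seasonal/irregular decomposition of a simulated series.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(2)
t = np.arange(120)                                  # 10 years of monthly data
series = (0.3 * t                                   # trend
          + 5 * np.sin(2 * np.pi * t / 12)          # seasonal component
          + rng.normal(0, 1, t.size))               # irregular component
index = pd.date_range("1970-01", periods=t.size, freq="MS")

result = seasonal_decompose(pd.Series(series, index=index),
                            model="additive", period=12)
print(result.seasonal.head(12))                     # estimated seasonal factors
```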

Journal ArticleDOI
TL;DR: In this article, the authors consider the use of Dn(p) instead of Sn(p), in a student's first introduction to statistical estimation, and describe the exact and asymptotic values of the average absolute error, and the appearance of its graph, in detail.
Abstract: For X with binomial (n, p) distribution the usual measure of the error of X/n as an estimator of p is its standard error Sn(p) = √E[(X/n − p)²] = √(p(1 − p)/n). A somewhat more natural measure is the average absolute error Dn(p) = E|X/n − p|. This article considers use of Dn(p) instead of Sn(p) in a student's first introduction to statistical estimation. Exact and asymptotic values of Dn(p), and the appearance of its graph, are described in detail. The same is done for the Poisson distribution.
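
A numerical sketch of the comparison (the article derives exact and asymptotic formulas; here Dn(p) is simply computed from the binomial probabilities and compared with the standard error and the normal-approximation value √(2p(1 − p)/(πn))).

```python
import numpy as np
from scipy.stats import binom

def D(n, p):
    """Exact average absolute error E|X/n - p| for X ~ binomial(n, p)."""
    k = np.arange(n + 1)
    return np.sum(binom.pmf(k, n, p) * np.abs(k / n - p))

n, p = 50, 0.3
S = np.sqrt(p * (1 - p) / n)                    # standard error S_n(p)
approx = np.sqrt(2 * p * (1 - p) / (np.pi * n)) # normal-approximation value of D_n(p)

print(f"D_n(p) exact  = {D(n, p):.5f}")
print(f"normal approx = {approx:.5f}")
print(f"S_n(p)        = {S:.5f}")
```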

Journal ArticleDOI
TL;DR: The purpose of this paper is to examine population forecasting in terms of the considerations of administrators, technicians, and planners in order to locate areas of conflict and propose solutions.
Abstract: Some governments rely on centralized, official sets of population forecasts for planning capital facilities. But the nature of population forecasting, as well as the milieu of government forecasting in general, can lead to the creation of extrapolative forecasts not well suited to long-range planning. This report discusses these matters, and suggests that custom-made forecasts and the use of forecast guidelines and a review process stressing forecast assumption justification may be a more realistic basis for planning individual facilities than general-purpose, official forecasts.

Journal ArticleDOI
TL;DR: In this article, the authors emphasize the determination of a consistent sequence of solutions of the likelihood equations rather than maximizing the likelihood, but this consistent solution is not necessarily the maximum likelihood estimate.
Abstract: If X1, …, Xn are identically and independently distributed, then as n → ∞, there exists under suitable regularity conditions a sequence of solutions of the likelihood equation that is consistent and asymptotically efficient. However, this consistent solution is not necessarily the maximum likelihood estimate. Likelihood estimation should therefore emphasize the determination of a consistent sequence of solutions of the likelihood equations rather than maximizing the likelihood. The issues are illustrated with some examples.
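
A small illustration of why the distinction matters (not an example from the paper): for a Cauchy location model with two well-separated clusters of observations, the likelihood equation has several roots, only one of which is the global maximizer.

```python
import numpy as np
from scipy.optimize import brentq

x = np.array([0., 1., 2., 10., 11., 12., 13.])   # two well-separated clusters

def score(theta):
    """Derivative of the Cauchy location log-likelihood (the likelihood equation)."""
    return np.sum(2 * (x - theta) / (1 + (x - theta) ** 2))

def loglik(theta):
    return -np.sum(np.log(1 + (x - theta) ** 2))

# Locate sign changes of the score on a grid, then refine each root.
grid = np.linspace(-5, 20, 2501)
vals = np.array([score(t) for t in grid])
roots = [brentq(score, a, b)
         for a, b, va, vb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
         if va * vb < 0]

for r in roots:
    print(f"root of likelihood equation at theta = {r:7.3f}, "
          f"log-likelihood = {loglik(r):8.3f}")
```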

Book ChapterDOI
TL;DR: The Editorial Board felt that it would be most appropriate to publish the papers essentially as they were presented at the annual ASA meeting in San Diego in 1978.
Abstract: The following four papers on the teaching of statistics were presented at a session sponsored by Mu Sigma Rho and the Section on Statistical Education at the annual ASA meeting in San Diego in 1978. The four authors are well qualified to discuss the subject of teaching statistics, and the papers present much helpful material for use by teachers of statistics. Since the four papers contain many personal views and opinions, the Editorial Board felt that it would be most appropriate to publish the papers essentially as they were presented at the San Diego meeting.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss how the consultant, the consultee, and the institution fare under a number of different organizational structures and conclude that a separately funded consulting center operated as a division of a department of statistics is the most appropriate way to provide statistical consulting and to ensure quality research.
Abstract: The importance of correct implementation of statistical methodology in a wide variety of research is widely acknowledged. Few universities, however, have given thought to providing a means of ensuring that their research efforts employ statistics in an appropriate manner. Specifically, many universities have not established any formal structure for the delivery of statistical consulting services. This article addresses this problem by discussing how the consultant, the consultee, and the institution fare under a number of different organizational structures. The conclusion reached is that a separately funded consulting center operated as a division of a department of statistics is the most appropriate way to provide statistical consulting and to ensure quality research, although at many universities alternate organizational structures are probably more feasible at this time.

Journal ArticleDOI
TL;DR: In this paper, the authors show that for each linear restriction, there exists a corresponding generalized inverse that yields the same solution for the treatment effects as the linear restriction does.
Abstract: In experimental statistics the usual method of estimating treatment effects is to introduce arbitrary linear restrictions among the treatment effects in order to obtain solutions of the normal equations. A comparatively recent approach is to dispense with the linear restriction and use a generalized inverse solution to the normal equations. The present note is an attempt to bring these two methods closer together and show their correspondence; namely, for each given linear restriction, there exists a corresponding generalized inverse that yields the same solution for the treatment effects as the linear restriction does.
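
A numerical check of one such pairing (a sketch, not the note's general argument): for an unbalanced one-way model, the Moore-Penrose generalized-inverse solution of the normal equations coincides with the solution obtained under the linear restriction that the solution be orthogonal to the null space of X′X.

```python
import numpy as np

rng = np.random.default_rng(3)
n_per_group = [4, 2, 5]                        # unbalanced one-way layout
effects = [0.0, 3.0, -1.0]
X_rows, y = [], []
for i, (n_i, eff) in enumerate(zip(n_per_group, effects)):
    for _ in range(n_i):
        row = [1, 0, 0, 0]                     # columns: intercept, tau_1..tau_3
        row[1 + i] = 1
        X_rows.append(row)
        y.append(10 + eff + rng.normal())
X, y = np.array(X_rows, float), np.array(y)

XtX, Xty = X.T @ X, X.T @ y

# (a) generalized-inverse solution of the (singular) normal equations
b_ginv = np.linalg.pinv(XtX) @ Xty

# (b) solution under the corresponding restriction r'b = 0,
#     where r spans the null space of X'X (here r = [1, -1, -1, -1])
r = np.array([1.0, -1.0, -1.0, -1.0])
A = np.vstack([XtX, r])
rhs = np.concatenate([Xty, [0.0]])
b_restr = np.linalg.lstsq(A, rhs, rcond=None)[0]

print("same solution:", np.allclose(b_ginv, b_restr))
print("both satisfy the normal equations:",
      np.allclose(XtX @ b_ginv, Xty), np.allclose(XtX @ b_restr, Xty))
```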


Journal ArticleDOI
TL;DR: The occurrence of missing data cells precludes a universally correct procedure for performing an analysis of variance; this is illustrated by using two computer routines to analyze a 2 × 3 factorial experiment with one missing cell.
Abstract: The occurrence of missing data cells precludes a universally correct procedure for performing an analysis of variance. This is illustrated by the use of two computer routines to analyze a 2 × 3 factorial experiment with one missing cell. One of these routines does, however, provide information that may enhance the usefulness of the associated results.
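
A minimal sketch of one facet of the ambiguity (simulated data, not the example analyzed in the article): with one empty cell the design is nonorthogonal, so sequential sums of squares for the main effects depend on the order in which the terms enter the model, and two routines can legitimately report different tables.

```python
import numpy as np

rng = np.random.default_rng(4)
data = []                                      # (level_a, level_b, response)
for a in range(2):
    for b in range(3):
        if (a, b) == (1, 2):
            continue                           # the missing cell
        for _ in range(3):
            data.append((a, b, 5 + 2 * a + b + rng.normal()))
A = np.array([d[0] for d in data], float)
B = np.array([d[1] for d in data])
y = np.array([d[2] for d in data])

ones = np.ones_like(y)
A_col = A.reshape(-1, 1)                                        # treatment-coded A
B_cols = np.column_stack([(B == 1).astype(float), (B == 2).astype(float)])

def rss(*blocks):
    """Residual sum of squares of the least squares fit with the given term blocks."""
    X = np.column_stack([ones] + list(blocks))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

rss0 = rss()
print("A first:  SS(A) = %.3f, SS(B|A) = %.3f"
      % (rss0 - rss(A_col), rss(A_col) - rss(A_col, B_cols)))
print("B first:  SS(B) = %.3f, SS(A|B) = %.3f"
      % (rss0 - rss(B_cols), rss(B_cols) - rss(A_col, B_cols)))
```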


Journal ArticleDOI
TL;DR: Shaffer's extensions and generalization of Dunnett's procedure are shown to be applicable in several nonparametric data analyses as discussed by the authors, such as the Kruskal-Wallis one-way analysis of variance (ANOVA) test for ranked data, Friedman's two-way ANOVA test for ranked data, and Cochran's test of change for dichotomous data.
Abstract: Shaffer's extensions and generalization of Dunnett's procedure are shown to be applicable in several nonparametric data analyses. Applications are considered within the context of the Kruskal-Wallis one-way analysis of variance (ANOVA) test for ranked data, Friedman's two-way ANOVA test for ranked data, and Cochran's test of change for dichotomous data.
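
For context, a generic sketch of the kind of analysis involved; this is a Dunn-style pairwise comparison of mean ranks with a Bonferroni adjustment after a Kruskal-Wallis test, not Shaffer's or Dunnett's procedure, and all data are simulated.

```python
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
groups = [rng.normal(loc, 1.0, 12) for loc in (0.0, 0.4, 1.2)]

h, p = stats.kruskal(*groups)
print(f"Kruskal-Wallis: H = {h:.3f}, p = {p:.4f}")

# Pairwise comparisons of mean ranks (Dunn-style large-sample z statistics).
all_obs = np.concatenate(groups)
ranks = stats.rankdata(all_obs)
sizes = [len(g) for g in groups]
N = len(all_obs)
mean_ranks = [r.mean() for r in np.split(ranks, np.cumsum(sizes)[:-1])]

n_pairs = len(sizes) * (len(sizes) - 1) // 2
for i, j in itertools.combinations(range(len(groups)), 2):
    se = np.sqrt(N * (N + 1) / 12 * (1 / sizes[i] + 1 / sizes[j]))
    z = (mean_ranks[i] - mean_ranks[j]) / se
    p_adj = min(1.0, 2 * (1 - stats.norm.cdf(abs(z))) * n_pairs)
    print(f"groups {i} vs {j}: z = {z:6.3f}, Bonferroni-adjusted p = {p_adj:.4f}")
```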

Journal ArticleDOI
TL;DR: In this article, a two-quarter sequence in research methods and statistics for nurses, taught by a statistician, avoids some of the common problems of service courses in statistics and provides training opportunities in statistical consultation for graduate students in statistics.
Abstract: This two-quarter sequence in research methods and statistics for nurses, taught by a statistician, avoids some of the common problems of service courses in statistics and provides training opportunities in statistical consultation for graduate students in statistics. The student-evaluation methods measure the reasoning process inherent in both research methods and statistics. Formative evaluations of the sequence indicate that it is successful in maintaining interest among students and increasing confidence in their ability to critique research articles, including statistical aspects of these. The teaching and evaluation methods can be used for other groups of students from the same discipline.

Journal ArticleDOI
Mervin E. Muller
TL;DR: A wish list of desirable statistical computing capabilities is presented, taking into account the nature of the statistical work and the choices presented by technology.
Abstract: A wish list of desirable statistical computing capabilities is presented. This may help one question which of these capabilities can be satisfied by existing packages, which might be met through reasonable extensions to these packages, which might require substantial new development, and which ought to be supplied by the computing environment rather than the packages. These questions are explored, taking into account the nature of the statistical work and the choices presented by technology. Attention is given to the barriers to be overcome if future statistical packages are to take full advantage of new technology.

Journal ArticleDOI
TL;DR: A decision-tree term project has been found to be an effective teaching device to help MBA students understand the selection of the appropriate statistical procedure for the real-world situation under analysis.
Abstract: An important step in the statistical problem-solving process is the selection of the appropriate statistical procedure for the real-world situation under analysis. A decision-tree term project has been found to be an effective teaching device to help MBA students understand this step. The project requires the students to construct a decision-tree structure, which, through a series of questions and responses, will lead from the statement of a statistical question to the appropriate sampling distribution to use in addressing the question.
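
A toy fragment (not any student's actual project) of the kind of question-and-response tree the assignment calls for, ending at a suggested sampling distribution.

```python
# A tiny dict-based decision tree: internal nodes hold a question and
# yes/no branches; leaves hold a recommended sampling distribution.
tree = {
    "question": "Is the parameter of interest a mean?",
    "yes": {
        "question": "Is the population standard deviation known?",
        "yes": "Use the standard normal (z) distribution.",
        "no": "Use the t distribution with n - 1 degrees of freedom.",
    },
    "no": "Further branches (proportions, variances, and so on) would continue here.",
}

def walk(node):
    """Follow yes/no answers until a leaf recommendation is reached."""
    while isinstance(node, dict):
        answer = input(node["question"] + " [yes/no] ").strip().lower()
        node = node["yes"] if answer.startswith("y") else node["no"]
    print(node)

if __name__ == "__main__":
    walk(tree)
```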

Journal ArticleDOI
TL;DR: An overview of the objectives, capabilities, status, and availability of the Consistent System is given.
Abstract: The Consistent System (CS) is an interactive computer system for researchers in the behavioral and policy sciences and in fields with similar requirements for data management and statistical analysis. The researcher is not expected to be a programmer. The system offers a wide range of facilities and permits the user to combine them in novel ways. In particular, tools for statistical analysis may be used in combination with a powerful relational subsystem for data base management. This paper gives an overview of the objectives, capabilities, status, and availability of the system.

Journal ArticleDOI
TL;DR: In this article, the authors examine some consequences of model misspecification and study the effects of preferential selection on the results of linear analyses.
Abstract: Linear models are often used to quantify differentials between protected and unprotected groups on variables such as salary. Some consequences of model misspecification are examined. In addition, the effects of preferential selection on linear analysis results are studied.
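
A hypothetical simulation of one misspecification effect (not the authors' models or data): when a salary-relevant variable that differs between groups is omitted from the linear model, the estimated group differential absorbs the omitted variable's effect even though the true group effect is zero.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
group = rng.integers(0, 2, n)                      # 0 = unprotected, 1 = protected
experience = rng.normal(10, 3, n) - 2 * group      # groups differ in experience
salary = 30 + 2.0 * experience + rng.normal(0, 2, n)  # no true group effect

def ols(X, y):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

X_full = np.column_stack([np.ones(n), experience, group])
X_miss = np.column_stack([np.ones(n), group])       # experience omitted

print("group coefficient, correct model:      %.3f" % ols(X_full, salary)[2])
print("group coefficient, misspecified model: %.3f" % ols(X_miss, salary)[1])
```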