
Showing papers on "Statistical hypothesis testing published in 1979"


Book
01 Jan 1979
TL;DR: This introductory text provides students with a conceptual understanding of basic statistical procedures, as well as the computational skills needed to complete them; coverage focuses on concepts critical to understanding current statistical research, such as power and sample size, multiple comparison tests, multiple regression, and analysis of covariance.
Abstract: This introductory text provides students with a conceptual understanding of basic statistical procedures, as well as the computational skills needed to complete them. The clear presentation, accessible language, and step-by-step instruction make it easy for students from a variety of social science disciplines to grasp the material. The scenarios presented in chapter exercises span the curriculum, from political science to marketing, so that students make a connection between their own area of interest and the study of statistics. Unique coverage focuses on concepts critical to understanding current statistical research such as power and sample size, multiple comparison tests, multiple regression, and analysis of covariance. Additional SPSS coverage throughout the text includes computer printouts and expanded discussion of their contents in interpreting the results of sample exercises. 1. Introduction. 2. Organizing and Graphing Data. 3. Describing Distributions: Individual Scores, Central Tendency, and Variation. 4. The Normal Distribution. 5. Correlation: A Measure of Relationship. 6. Linear Regression: Prediction. 7. Sampling, Probability, and Sampling Distributions. 8. Hypothesis Testing: One-Sample Case for the Mean. 9. Estimation: One-Sample Case for the Mean. 10. Hypothesis Testing: One-Sample Case for Other Statistics. 11. Hypothesis Testing: Two-Sample Case for the Mean. 12. Hypothesis Testing: Two-Sample Case for Other Statistics. 13. Determining Power and Sample Size. 14. Hypothesis Testing, K-Sample Case: Analysis of Variance, One-Way Classification. 15. Multiple-Comparison Procedures. 16. Analysis of Variance, Two-Way Classification. 17. Linear Regression: Estimation and Hypothesis Testing. 18. Multiple Linear Regression. 19. Analysis of Covariance. 20. Other Correlation Coefficients. 21. Chi-Square (X2) Tests for Frequencies. 22. Other Nonparametric Tests.

4,010 citations


Journal ArticleDOI
TL;DR: In this article, a synthesis of Bayesian and sample-reuse approaches to the problem of high structure model selection geared to prediction is presented; similar methods are used for low structure models.
Abstract: This article offers a synthesis of Bayesian and sample-reuse approaches to the problem of high structure model selection geared to prediction. Similar methods are used for low structure models. Nested and nonnested paradigms are discussed and examples given.

940 citations
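
The sample-reuse half of this synthesis can be sketched with leave-one-out prediction: refit each candidate model with one observation held out and score it on the held-out point. The following is a minimal illustration on invented data, not the authors' actual procedure:

```python
import numpy as np

def loo_predictive_score(X, y):
    """Leave-one-out sum of squared prediction errors for a linear model.

    A generic sample-reuse criterion: refit the model n times, each time
    predicting the held-out observation. Smaller is better.
    """
    n = len(y)
    sse = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        sse += (y[i] - X[i] @ beta) ** 2
    return sse

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=50)

X1 = np.column_stack([np.ones(50), x])           # linear candidate model
X2 = np.column_stack([np.ones(50), x, x ** 2])   # quadratic candidate model
print(loo_predictive_score(X1, y), loo_predictive_score(X2, y))
```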


Journal ArticleDOI
TL;DR: In this article, a new procedure for determining the optimal number of interpretable factors to extract from a correlation matrix is introduced and compared to more conventional procedures; the method evaluates the magnitude of the Very Simple Structure index of goodness of fit for factor solutions of increasing rank.
Abstract: A new procedure for determining the optimal number of interpretable factors to extract from a correlation matrix is introduced and compared to more conventional procedures. The new method evaluates the magnitude of the Very Simple Structure index of goodness of fit for factor solutions of increasing rank. The number of factors which maximizes the VSS criterion is taken as being the optimal number of factors to extract. Thirty-two artificial and two real data sets are used in order to compare this procedure with such methods as maximum likelihood, the eigenvalue greater than 1.0 rule, and comparison of the observed eigenvalues with those expected from random data.

368 citations
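
The VSS idea lends itself to a short sketch: for each candidate rank, simplify the loading matrix to its largest loadings per variable and ask how well the simplified structure reproduces the off-diagonal correlations. This is a rough illustration at complexity 1 on synthetic data, using a generic factor-analysis fit rather than the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def vss(corr, loadings, complexity=1):
    """Very Simple Structure fit: zero all but the `complexity` largest
    loadings per variable, then measure how well the simplified structure
    reproduces the off-diagonal correlations (1 = perfect)."""
    simple = np.zeros_like(loadings)
    for i, row in enumerate(loadings):
        keep = np.argsort(np.abs(row))[-complexity:]
        simple[i, keep] = row[keep]
    resid = corr - simple @ simple.T
    off = ~np.eye(corr.shape[0], dtype=bool)
    return 1.0 - np.sum(resid[off] ** 2) / np.sum(corr[off] ** 2)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 9))
X[:, 3:6] += X[:, [0]]          # induce two correlated blocks (2 factors)
X[:, 6:9] += X[:, [1]]
X = (X - X.mean(0)) / X.std(0)
R = np.corrcoef(X, rowvar=False)

for k in range(1, 5):
    L = FactorAnalysis(n_components=k).fit(X).components_.T
    print(k, round(vss(R, L), 3))   # pick the k with the largest VSS
```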


Book
01 Jan 1979
TL;DR: In this article, a Monte Carlo study is made of the small sample properties of various estimators of the linear regression model with first-order autocorrelated errors, and the best of the feasible estimators is iterated Prais-Winsten using a sum-of-squared-error minimizing estimate of rho.
Abstract: A Monte Carlo study is made of the small sample properties of various estimators of the linear regression model with first-order autocorrelated errors. When independent variables are trended, estimators using T transformed observations (Prais-Winsten) are much more efficient than those using T-1 (Cochrane-Orcutt). The best of the feasible estimators is iterated Prais-Winsten using a sum-of-squared-error minimizing estimate of the autocorrelation coefficient rho. None of the feasible estimators performs well in hypothesis testing; all seriously underestimate standard errors, making estimated coefficients appear to be much more significant than they actually are.

181 citations
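
The key difference between the estimators compared above is how the first observation is handled. Below is a minimal numpy sketch of Prais-Winsten estimation on simulated trended data, with a grid search standing in for the paper's SSE-minimizing estimate of rho; it is illustrative, not the study's code:

```python
import numpy as np

def prais_winsten_transform(X, y, rho):
    """Transform all T observations for AR(1) errors with parameter rho.

    The first row is rescaled by sqrt(1 - rho^2) rather than dropped,
    which is what distinguishes Prais-Winsten from Cochrane-Orcutt and
    matters most when the regressors are trended.
    """
    Xs = X - rho * np.vstack([X[:1], X[:-1]])       # quasi-difference rows 2..T
    ys = y - rho * np.concatenate([y[:1], y[:-1]])
    w = np.sqrt(1.0 - rho ** 2)
    Xs[0], ys[0] = w * X[0], w * y[0]               # keep row 1, rescaled
    return Xs, ys

def fit(X, y, rho):
    Xs, ys = prais_winsten_transform(X, y, rho)
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    resid = ys - Xs @ beta
    return beta, float(resid @ resid)

rng = np.random.default_rng(2)
T = 100
t = np.arange(T, dtype=float)
u = np.zeros(T)
for i in range(1, T):                               # AR(1) errors, rho = 0.7
    u[i] = 0.7 * u[i - 1] + rng.normal()
X = np.column_stack([np.ones(T), t])                # trended regressor
y = 1.0 + 0.5 * t + u

# Choose rho on a grid to minimize the transformed SSE, mimicking the
# "sum-of-squared-error minimizing estimate of rho" described above.
best = min((fit(X, y, r)[1], r) for r in np.linspace(-0.95, 0.95, 39))
print("rho-hat:", best[1], "beta:", fit(X, y, best[1])[0])
```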


Journal ArticleDOI
TL;DR: In this paper, the authors present an empirically implementable technique for the analysis of noncompetitive behavior in production and provide a statistical test of the hypothesis of price-taking behavior, which can be used to distinguish among different market structures.

165 citations


Journal ArticleDOI
TL;DR: It is concluded that better solutions to these problems, better data, more sophisticated use of economic theory, application of more rigorous diagnostic checks, and use of well-designed simulation experiments probably will produce improved macroeconometric models.
Abstract: This article reviews some research bearing on the statistical analysis of econometric models. Many estimation, testing, and prediction techniques used in econometrics have only large-sample justifications. Selected Bayesian inference results relating to econometric models are reviewed. On the problem of constructing econometric models, an approach that is a blend of traditional econometric and modern time series analysis techniques is described. Many statistical problems requiring further analysis are noted. It is concluded that better solutions to these problems, better data, more sophisticated use of economic theory, application of more rigorous diagnostic checks, including forecasting checks, and use of well-designed simulation experiments probably will produce improved macroeconometric models.

162 citations


Journal ArticleDOI
TL;DR: A method is developed for calculating a probability-of-location distribution for an average individual member of a population; the method requires no assumptions about the shape of the distribution and has a number of advantages over existing size indices.

129 citations


Journal ArticleDOI
TL;DR: In this paper, a goodness-of-fit procedure is developed for testing whether the underlying distribution of right-censored data is a specified function G. The test statistic C is the one-sample limit of Efron's (1967) two-sample statistic W; comparisons with competing tests are made on the basis of applicability, the extent to which the censoring distribution can affect the inference, and power.
Abstract: For right-censored data, a goodness-of-fit procedure is developed for testing whether the underlying distribution is a specified function G. The test statistic C is the one-sample limit of Efron's (1967) two-sample statistic W. The test based on C is compared with recently proposed competitors due to Koziol and Green (1976) and Hyde (1977). The comparisons are on the basis of applicability, the extent to which the censoring distribution can affect the inference, and power. It is shown that in certain situations the C test compares favourably with the tests of Koziol-Green and Hyde.

117 citations



Journal ArticleDOI
TL;DR: This paper demonstrates an hypothesis test procedure which permits the objective and unambiguous evaluation of comparative dielectric tests on two different sets of data.
Abstract: The results of accelerated aging tests on solid electrical insulation are difficult to evaluate objectively, primarily because of the inherently large variability of the test data. This variability is often represented by the Weibull or other extreme-value probability distributions. This paper demonstrates an hypothesis test procedure which permits the objective and unambiguous evaluation of comparative dielectric tests on two different sets of data. The computation techniques are facilitated through the use of a Fortran computer program. A significant difference must be established at low probabilities of failure. Analysis of typical aging tests from the literature indicates that many experiments performed to date may not be statistically significant at utilization levels. The number of tests required to achieve unambiguous significance at low probability levels may render meaningful accelerated aging tests uneconomic.

75 citations


Journal ArticleDOI
TL;DR: A nonparametric statistical test for the analysis of flow cytometry derived histograms is presented and different sets of histograms from numerous biological systems can be compared.
Abstract: A nonparametric statistical test for the analysis of flow cytometry derived histograms is presented. The method involves smoothing and translocation of data, area normalization, channel by channel determination of the mean and S.D., and use of Bayes' theorem for unknown histogram classification. With this statistical method, different sets of histograms from numerous biological systems can be compared.
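
A toy version of this pipeline: smooth and area-normalize replicate histograms, compute channel-by-channel means and SDs, and classify an unknown histogram via Bayes' theorem under a per-channel Gaussian assumption. The translocation step is omitted and all data are simulated, so this is a sketch of the idea rather than the published method:

```python
import numpy as np

def preprocess(h, k=3):
    """Smooth with a moving average and normalize to unit area."""
    kernel = np.ones(k) / k
    s = np.convolve(h, kernel, mode="same")
    return s / s.sum()

def class_stats(histograms):
    """Channel-by-channel mean and SD over a set of replicate histograms."""
    H = np.array([preprocess(h) for h in histograms])
    return H.mean(axis=0), H.std(axis=0, ddof=1) + 1e-9

def classify(h, stats, priors):
    """Assign an unknown histogram via Bayes' theorem, treating each
    channel as independently Gaussian around its class mean."""
    x = preprocess(h)
    post = []
    for (mu, sd), prior in zip(stats, priors):
        loglik = -0.5 * np.sum(((x - mu) / sd) ** 2 + np.log(2 * np.pi * sd ** 2))
        post.append(np.log(prior) + loglik)
    return int(np.argmax(post))

rng = np.random.default_rng(3)
chan = np.arange(64)
make = lambda c: rng.poisson(200 * np.exp(-0.5 * ((chan - c) / 6) ** 2) + 5)
A = [make(20) for _ in range(5)]     # replicate histograms of class A
B = [make(30) for _ in range(5)]     # replicate histograms of class B
stats = [class_stats(A), class_stats(B)]
print(classify(make(21), stats, priors=[0.5, 0.5]))   # expect class 0
```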

Book
30 Nov 1979
TL;DR: A textbook covering basic numerical techniques, basic statistical techniques (including the Pearson system of probability-density functions and hypothesis testing), and the method of least squares, from simple linear regression through non-linear regression.
Abstract: Part I. Basic numerical techniques: 1. Introduction 2. Errors, mistakes and the arrangement of work 3. The real roots of non-linear equations 4. Simple methods for smoothing crude data 5. The area under a curve 6. Finite differences, interpolation and numerical differentiation 7. Some other numerical techniques. Part II. Basic statistical techniques: 8. Probability, statistical distributions and moments 9. The normal and related distributions 10. The common discrete distributions 11. The Pearson system of probability-density functions 12. Hypothesis testing 13. Point and interval estimation 14. Some special statistical techniques. Part III. The method of least squares: 15. Simple linear regression and the method of least squares 16. Curvilinear regression 17. Multiple linear regression 18. Non-linear regression.

Journal ArticleDOI
TL;DR: This article examined the relationship between two methods for detecting group effects in nonexperimental data, covariance analysis and contextual analysis, and found that contextual analysis is more accurate than covariance analysis.
Abstract: This article examines the relationship between two methods for detecting group effects in nonexperimental data: covariance analysis and contextual analysts. The examination shows that contextual ef...

Journal ArticleDOI
TL;DR: In this article, a general pretheoretical framework for the study of inference is presented, based on Social Judgment Theory (SJT), which has been developed from Brunswik's probabilistic functionalism.
Abstract: This paper presents a general pretheoretical framework for the study of inference. The framework is that of Social Judgment Theory, which has been developed from Brunswik's probabilistic functionalism. The first section discusses the fundamental theoretical ideas and methodological principles. Important among these is the stress on the need to study the relation between the cognitive system and the inference task using parallel concepts for describing the cognitive system and the task, the theory of cognitive tasks, and the methodology of formal representative sampling. The second section describes a series of experimental paradigms developed from the basic ideas discussed in the first section. The third and final section gives some examples of actual empirical research, mainly research concerned with the hypothesis-testing process by means of which subjects learn inference tasks, studies on cognitive skills in using various rules for making inferences, new conceptions of feedback, research on the effects of feedforward, and interpersonal learning.

Journal ArticleDOI
Kerry L. Lee1
TL;DR: In this article, a union-intersection (UI) criterion is developed that is more manageable computationally than the multivariate likelihood ratio (LR) criterion for multiresponse data.
Abstract: Criteria are considered for testing the hypothesis (in multiresponse data) that the observations are a random sample from one multinormal population versus the alternative that, for some partition of the data, the observations arise from two multinormal populations with different means. This hypothesis is analogous to the univariate formulation of Engelman and Hartigan (1969), in which they studied a likelihood ratio test for clusters. A union-intersection (UI) criterion is developed that is more manageable computationally than the multivariate likelihood ratio (LR) criterion. The UI and LR criteria and a “linear discrimination” statistic are shown, however, to be equivalent. Some properties of the tests are provided.

Journal ArticleDOI
TL;DR: In this article, a new approach is proposed for testing goodness of fit for censored samples, where a transformation is applied so that the transformed censored sample behaves under the null hypothesis like a complete sample from the uniform (0, 1) distribution.
Abstract: A new approach is proposed for testing goodness of fit for censored samples. A transformation is applied so that the transformed censored sample behaves under the null hypothesis like a complete sample from the uniform (0,1) distribution. Any standard goodness-of-fit test can then be applied using existing tables. Empirical comparisons indicate that the proposed technique provides better overall power than other existing methods.

Journal ArticleDOI
TL;DR: In this article, a simple alternate procedure based on the familiar Fisher z was proposed, which is applicable to a wide range of problems involving tests between dependent correlations and has documented mathematical support when its power curves are examined.
Abstract: Several proposed statistics for testing the significance of the difference in two correlated r’s were first reviewed. A simple alternate procedure based on the familiar Fisher z was then suggested. This procedure, unlike its predecessors, is applicable to a wide range of problems involving tests between dependent correlations. It also has documented mathematical support when its power curves are examined. Since the procedure is asymptotic, a large sample size is required. The only other assumption is that the observations associated with the sample are drawn from a joint multivariate normal distribution.
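
One standard asymptotic z test of this kind, for two correlations r12 and r13 that share variable 1, applies Fisher's z with a Dunn-Clark-style covariance correction. The exact statistic proposed in the article may differ; this is an illustrative version under the multivariate-normal, large-sample assumptions stated above:

```python
import numpy as np
from scipy.stats import norm

def dependent_r_test(r12, r13, r23, n):
    """Asymptotic z test of H0: rho12 = rho13 when both correlations share
    variable 1 (Dunn-Clark-style covariance; illustrative, not necessarily
    the article's exact statistic)."""
    z12, z13 = np.arctanh(r12), np.arctanh(r13)   # Fisher z transforms
    num = (r23 * (1 - r12**2 - r13**2)
           - 0.5 * r12 * r13 * (1 - r12**2 - r13**2 - r23**2))
    c = num / ((1 - r12**2) * (1 - r13**2))       # approx. corr(z12, z13)
    z = (z12 - z13) * np.sqrt((n - 3) / (2 - 2 * c))
    return z, 2 * norm.sf(abs(z))                 # statistic, two-sided p

print(dependent_r_test(0.50, 0.30, 0.40, n=103))
```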


Journal ArticleDOI
TL;DR: The power of a statistic is a measure of the probability, when the research hypothesis is true, that the predicted results will be observed in the sample; after the fact, the power value indicates the chance of finding similar results upon replication.
Abstract: Cohen (1962), Brewer (1972), and Schmelkin (Note 1) report that over 70 percent of published studies in American Psychological Association, AERA, and Council for Exceptional Children journals lack statistical power. This biased sample represents reports of better studies, since countless more research reports remain unpublished or are published in journals of "less" repute. The power of a statistic is a measure of the probability, when the research hypothesis is true, that the predicted results will be observed in the sample. After the fact, the power value indicates the chance of finding similar results upon replication. Apparently, before the fact, most studies in education lack much chance of detecting their predicted results. So how come, after the fact, so many hypotheses in educational research are supported?
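
As a concrete example of the power calculation at issue here, the power of a two-sided, two-sample t test can be computed from the noncentral t distribution; a minimal sketch:

```python
from scipy.stats import nct, t

def two_sample_t_power(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t test for standardized effect
    size d, computed via the noncentral t distribution."""
    df = 2 * n_per_group - 2
    nc = d * (n_per_group / 2) ** 0.5          # noncentrality parameter
    tcrit = t.ppf(1 - alpha / 2, df)
    return nct.sf(tcrit, df, nc) + nct.cdf(-tcrit, df, nc)

# A "medium" effect (d = 0.5) with 20 subjects per group has power well
# below the conventional 0.80 target -- roughly 0.34:
print(round(two_sample_t_power(0.5, 20), 3))
```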

Journal ArticleDOI
TL;DR: In this article, the authors investigate the interdependence of categorical variables when the observations on these variables take place in a sequence over time, so that consecutive observations are distinctly non-independent and standard χ² tests based on a multinomial distribution are inappropriate.
Abstract: The problem considered is the investigation of the interdependence of categorical variables, when the observations on these variables take place in a sequence over time, so that consecutive observations are distinctly non-independent, and standard χ² tests based on a multinomial distribution are inappropriate. Using fairly weak assumptions about the underlying probability distribution, upper and lower bounds for the appropriate χ² test statistics are found. This is deliberately done without writing down the full likelihood of the data. Although some matrix algebra is necessary for the proofs of the inequalities, the application of the results of this paper does not require mathematical expertise, and should be intelligible to anyone used to doing χ² tests. The particular problem considered here arose in an animal behaviour context, and is illustrated numerically in that context, but it is quite a common problem in behavioural data.


Journal ArticleDOI
TL;DR: Bounds are given on the probability of choosing an incorrect hypothesis, and on the total probability of error, for both discrete and continuous time parameter, and discrete and continuous state space.
Abstract: We study a statistical hypothesis testing problem, where a sample function of a Markov process with one of two sets of known parameters is observed over a finite time interval. When a log likelihood ratio test is used to discriminate between the two sets of parameters, we give bounds on the probability of choosing an incorrect hypothesis, and on the total probability of error, for both discrete and continuous time parameter, and discrete and continuous state space. The asymptotic behavior of the bounds is examined as the observation interval becomes infinite.
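
For the discrete-time, finite-state case, the log likelihood ratio in question is just a sum of log transition-probability ratios along the observed path. A minimal sketch with illustrative parameters (the paper's error bounds themselves are not computed here):

```python
import numpy as np

def markov_llr(x, P0, P1, pi0=None, pi1=None):
    """Log likelihood ratio log(L1/L0) for a finite-state chain observed
    as the state sequence x, under known transition matrices P0 and P1
    (initial distributions optional)."""
    llr = 0.0
    if pi0 is not None and pi1 is not None:
        llr += np.log(pi1[x[0]]) - np.log(pi0[x[0]])
    for a, b in zip(x[:-1], x[1:]):
        llr += np.log(P1[a, b]) - np.log(P0[a, b])
    return llr            # decide H1 if llr > threshold, else H0

P0 = np.array([[0.9, 0.1], [0.2, 0.8]])   # hypothesized parameters, H0
P1 = np.array([[0.6, 0.4], [0.4, 0.6]])   # hypothesized parameters, H1

rng = np.random.default_rng(4)
x = [0]
for _ in range(200):                      # simulate a path under H0
    x.append(rng.choice(2, p=P0[x[-1]]))
print(markov_llr(np.array(x), P0, P1))    # typically negative under H0
```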

Journal ArticleDOI
01 Apr 1979
TL;DR: A statistical algorithm is developed to study human tracking behavior in a precognitive tracking task and a decision rule based on a statistical test of normality is used to delineate the two regions of tracking behavior.
Abstract: A statistical algorithm is developed to study human tracking behavior in a precognitive tracking task. The algorithm presented here determines the point in time when a tracking task becomes too difficult for the human to follow. Consequently, different behavior responses are observed to occur. A decision rule based on a statistical test of normality is used to delineate the two regions of tracking behavior. The proof of convergence of this algorithm to a unique solution is given. Data from a good and poor tracker are analyzed using this algorithm to illustrate how to utilize the approach presented here.
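
A much-simplified stand-in for such a decision rule: slide a window over the tracking-error sequence and flag the first point at which a normality test rejects. This sketch uses the Shapiro-Wilk test on simulated errors and does not reproduce the paper's algorithm or its convergence proof:

```python
import numpy as np
from scipy.stats import shapiro

def first_breakdown(errors, window=50, alpha=0.01):
    """Scan tracking errors with a sliding window and return the index
    at which a normality test first rejects, i.e. where the tracking
    behavior appears to change regime."""
    for start in range(len(errors) - window + 1):
        if shapiro(errors[start:start + window]).pvalue < alpha:
            return start + window
    return None

rng = np.random.default_rng(5)
easy = rng.normal(size=300)                   # trackable region: Gaussian errors
hard = rng.exponential(size=300) - 1.0        # breakdown region: skewed errors
print(first_breakdown(np.concatenate([easy, hard])))
```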

Journal ArticleDOI
TL;DR: In mastery testing, the raw agreement index and the kappa index may be estimated via one test administration when the test scores follow beta-binomial distributions; formulae, tables, and a computer program facilitate the computation of the standard errors of the estimates.
Abstract: In mastery testing the raw agreement index and the kappa index may be estimated via one test administration when the test scores follow beta-binomial distributions. This paper reports formulae, tables, and a computer program which facilitate the computation of the standard errors of the estimates. Illustrative applications are provided in the form of confidence intervals, hypothesis testing, and minimum sample sizes in reliability studies for mastery tests.

Journal ArticleDOI
TL;DR: In this paper, the authors suggest that conventional statistical tests may give misleading results when applied to biochemical data, and that distribution-free procedures are less likely to fail in this way.

ReportDOI
01 Aug 1979
TL;DR: A survey of the literature dealing with mixtures of distributions can be found in this paper; the topics covered relate to probabilistic properties, estimation, hypothesis testing, and multiple decision (selection and ranking) procedures.
Abstract: This paper surveys some of the literature dealing with mixtures of distributions. The topics covered relate to probabilistic properties, estimation, hypothesis testing and multiple decision (selection and ranking) procedures. The results reviewed concerning probabilistic properties of mixture distributions include identifiability, scale mixtures, infinite divisibility, atomicity and perfectness. The results on estimation theory reviewed include the method of moments, the method of maximum likelihood, the method of least squares, Bayesian estimation, and the method of curve fitting. The results for hypothesis testing provide tests of whether an observed sample is a mixture of two populations in some unknown proportion, and of whether the mean of the mixture population is equal to some known value. In the last section, we give some new results for selection and ranking procedures for mixtures of distributions.
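
The simplest estimation problem in this literature, recovering the unknown mixing proportion when the two component densities are known, has a clean EM solution. A minimal sketch on simulated data (not from the survey):

```python
import numpy as np
from scipy.stats import norm

def em_mixing_proportion(x, f1, f2, p=0.5, iters=200):
    """EM estimate of the unknown proportion p in p*f1 + (1-p)*f2 when
    the two component densities f1 and f2 are known."""
    d1, d2 = f1(x), f2(x)
    for _ in range(iters):
        w = p * d1 / (p * d1 + (1 - p) * d2)   # E-step: P(component 1 | x_i)
        p = w.mean()                           # M-step: update the proportion
    return p

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 1, 300)])
print(em_mixing_proportion(x, norm(0, 1).pdf, norm(3, 1).pdf))  # approx 0.7
```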


Posted ContentDOI
08 Aug 1979
TL;DR: In this article, the authors construct optimal test statistics against a variety of specific alternatives, where the model under consideration is taken to be the null and the alternative is some more general specification.
Abstract: In order to assess the validity of the specification of an econometric model, it is useful to have a variety of diagnostic statistics which might provide evidence on the existence, and possibly the type, of misspecification involved. One source of diagnostics is hypothesis tests where the model under consideration is taken to be the null and the alternative is some more general specification. A particularly attractive approach is to construct optimal test statistics against a variety of specific alternatives. In this way it is possible to have reasonable power against a collection of interesting alternatives, although when looking at sets of non-independent statistics, one must be cautious about interpretations of the overall size of the test.

Journal ArticleDOI
TL;DR: A statistical method for analyzing experiments on frequency-dependent fitness, suggested previously for other purposes, is superior to the statistical procedures now commonly employed; it is illustrated by application to published Drosophila data from differential mating success experiments.
Abstract: Experiments on frequency-dependent fitness often consist of forming pairwise mixtures of distinguishable types at several frequency combinations. These mixtures are allowed to undergo competition, after which the performance of each type is enumerated. A statistical method for analyzing such experiments is described in this article. This method, suggested previously for other purposes, is superior to the statistical procedures now commonly employed. It involves the maximum likelihood estimation of parameters for two logistic regression models: one which assumes that fitness is frequency-dependent, the other that fitness is constant over changing frequency. Estimators for both models can be calculated without difficulty using an iterative numerical algorithm implemented in a Fortran computer program available from the authors. Fitting both models allows for the construction of a likelihood ratio statistical test for whichever model is more appropriate. The method is illustrated by application to published Drosophila data from differential mating success experiments.
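
The two-model likelihood-ratio idea is easy to reproduce with modern GLM software in place of the authors' Fortran program. A sketch with invented counts, fitting a constant-fitness and a frequency-dependent logistic model and comparing them with a chi-squared likelihood ratio test:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

# Hypothetical counts: at each initial frequency of type A, how many of
# the scored matings involved type A. (Numbers invented for illustration.)
freq = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
n = np.array([200, 200, 200, 200, 200])
success = np.array([35, 62, 104, 142, 168])

y = np.column_stack([success, n - success])
X_const = np.ones((len(freq), 1))            # constant-fitness model
X_freq = sm.add_constant(freq)               # frequency-dependent model

m0 = sm.GLM(y, X_const, family=sm.families.Binomial()).fit()
m1 = sm.GLM(y, X_freq, family=sm.families.Binomial()).fit()

lr = 2 * (m1.llf - m0.llf)                   # likelihood ratio statistic
print(lr, chi2.sf(lr, df=1))                 # small p => frequency dependence
```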

Journal ArticleDOI
TL;DR: In this paper, the authors examine the statistical connection between Granger causality and the natural rate hypothesis and argue that these are fundamentally unrelated concepts, so that finding that money does "cause" the unemployment rate does not constitute evidence against the natural rate hypothesis.
Abstract: In a recent paper, Sargent (1976a) proposed that the natural rate hypothesis could be tested by equating it to the statistical hypothesis that policy variables do not "cause" real variables in the sense defined by Granger (1969). Thus, if past rates of growth in the money supply do not help to predict the unemployment rate when added to a regression which includes past unemployment rates, then money does not "cause" the unemployment rate and the data would be regarded by Sargent as being consistent with the natural rate hypothesis. The purpose of this note is to examine the statistical connection between Granger causality and the natural rate hypothesis and to argue that these are fundamentally unrelated concepts. Therefore, finding that money does "cause" the unemployment rate (as some of Sargent's results suggest) does not constitute evidence against the natural rate hypothesis. It will be helpful to begin with an examination of the Granger causality relations linking an exogenous policy variable, say mt, to a goal variable, say yt. The bivariate stochastic process generating {Yt, mt} can be written in moving average form as
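
The Granger test at issue reduces to an F test of whether lags of the policy variable improve the prediction of the goal variable beyond its own lags. An illustrative sketch (not Sargent's specification; the data are simulated so that m does help predict):

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import f as f_dist

def granger_f(y, x, lags=4):
    """F test of H0: lagged x adds no predictive power for y beyond y's
    own lags -- the operational notion of (non-)causality used above."""
    T = len(y)
    Y = y[lags:]
    own = np.array([y[t - lags:t][::-1] for t in range(lags, T)])
    other = np.array([x[t - lags:t][::-1] for t in range(lags, T)])
    r = sm.OLS(Y, sm.add_constant(own)).fit()                      # restricted
    u = sm.OLS(Y, sm.add_constant(np.hstack([own, other]))).fit()  # unrestricted
    F = ((r.ssr - u.ssr) / lags) / (u.ssr / u.df_resid)
    return F, f_dist.sf(F, lags, u.df_resid)

rng = np.random.default_rng(7)
m = rng.normal(size=400)                    # "money growth"
un = np.zeros(400)
for t in range(1, 400):                     # money growth feeds "unemployment"
    un[t] = 0.5 * un[t - 1] + 0.3 * m[t - 1] + rng.normal(scale=0.5)
print(granger_f(un, m))                     # small p-value: m "causes" un
```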