
Showing papers on "Statistical hypothesis testing" published in 1997


Book
07 Apr 1997
TL;DR: The authors present a foundation for robust statistical methods, covering estimation of measures of location and scale, confidence intervals in the one-sample case, comparisons of independent and dependent groups, correlation and tests of independence, and robust regression.
Abstract: Preface 1 Introduction 2 A Foundation for Robust Methods 3 Estimating Measures of Location and Scale 4 Confidence Intervals in the One-Sample Case 5 Comparing Two Groups 6 Some Multivariate Methods 7 One-Way and Higher Designs for Independent Groups 8 Comparing Multiple Dependent Groups 9 Correlation and Tests of Independence 10 Robust Regression 11 More Regression Methods

1,836 citations


Book
27 Jan 1997
TL;DR: Quantitative Data Analysis with SPSS for Windows explains statistical tests using the latest version of SPSS, the most widely used computer package for analyzing quantitative data, using the same formula-free, non-technical approach.
Abstract: From the Publisher: Quantitative Data Analysis with SPSS for Windows explains statistical tests using the latest version of SPSS, the most widely used computer package for analyzing quantitative data. Using the same formula-free, non-technical approach as the highly successful non-windows version, it assumes no previous familiarity with either statistics or computing, and takes the reader step-by-step through each of the techniques for which SPSS for Windows can be used. The book also contains exercises with answers, and covers issues such as sampling, statistical significance, and the selection of appropriate tests.

1,056 citations


Book
01 Jun 1997
TL;DR: The book develops the Law of Likelihood as the first principle for measuring statistical evidence, contrasts the likelihood paradigm with Neyman-Pearson and Fisherian significance testing, and addresses old paradoxes such as whether the strength of evidence is limited by the researcher's expectations.
Abstract: The First Principle Introduction The Law of Likelihood Three Questions Towards Verification Relativity of Evidence Strength of Evidence Counterexamples Testing Simple Hypotheses Composite Hypotheses Another Counterexample Irrelevance of the Sample Space The Likelihood Principle Evidence and Uncertainty Summary Exercises Neyman-Pearson Theory Introduction Neyman-Pearson Statistical Theory Evidential Interpretation of Results of Neyman-Pearson Decision Procedures Neyman-Pearson Hypothesis Testing in Planning Experiments: Choosing the Sample Size Summary Exercises Fisherian Theory Introduction A Method for Measuring Statistical Evidence: The Test of Significance The Rationale for Significance Tests Troubles with p-Values Rejection Trials A Sample of Interpretations The Illogic of Rejection Trials Confidence Sets from Rejection Trials Alternative Hypothesis in Science Summary Paradigms for Statistics Introduction Three Paradigms An Alternative Paradigm Probabilities of Weak and Misleading Evidence: Normal Distribution Mean Understanding the Likelihood Paradigm Evidence about a Probability: Planning a Clinical Trial and Interpreting the Results Summary Exercises Resolving the Old Paradoxes Introduction Why is Power of Only 0.80 OK? Peeking at Data Repeated Tests Testing More than One Hypothesis What's Wrong with One-Sided Tests? Must the Significance Level be Predetermined? And is the Strength of Evidence Limited by the Researcher's Expectations? Summary Looking at Likelihoods Introduction Evidence about Hazard Rates in Two Factories Evidence about an Odds Ratio A Standardized Mortality Rate Evidence about a Finite Population Total Determinants of Plans to Attend College Evidence about the Probabilities in a 2x2x2x2 Table Evidence from a Community Intervention Study of Hypertension Effects of Sugars on Growth of Pea Sections: Analysis of Variance Summary Exercises Nuisance Parameters Introduction Orthogonal Parameters Marginal Likelihoods Conditional Likelihoods Estimated Likelihoods Profile Likelihoods Synthetic Conditional Likelihoods Summary Exercises Bayesian Statistical Inference Introduction Bayesian Statistical Models Subjectivity in Bayesian Models The Trouble with Bayesian Statistics Are Likelihood Methods Bayesian? Objective Bayesian Inference Bayesian Integrated Likelihoods Summary Appendix: The Paradox of the Ravens

880 citations


Journal ArticleDOI
TL;DR: Temporal autocorrelation, spatial coherency, and their effects on voxel-wise parametric statistics were examined in BOLD fMRI null-hypothesis, or "noise," datasets.

687 citations


Posted Content
TL;DR: In this article, the authors discuss the event study methodology, including hypothesis testing, the use of different benchmarks for the normal rate of return, the power of the methodology in different applications and the modeling of abnormal returns as coefficients in a (multivariate) regression framework.
Abstract: This paper discusses the event study methodology, beginning with FFJR (1969), including hypothesis testing, the use of different benchmarks for the normal rate of return, the power of the methodology in different applications and the modeling of abnormal returns as coefficients in a (multivariate) regression framework. It also focuses on frequently encountered statistical problems in event studies and their solutions.
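
The mechanics described here (a benchmark for the normal rate of return, abnormal returns, and a hypothesis test on them) can be illustrated with a minimal market-model sketch. This is not the paper's own procedure: the function name `market_model_abnormal_returns`, the simulated returns, and the simple t-statistic (which ignores estimation-error and cross-sectional corrections) are illustrative assumptions.

```python
import numpy as np

def market_model_abnormal_returns(r_firm, r_market, est_idx, event_idx):
    """Fit a market model on the estimation window and compute abnormal
    returns (actual minus predicted) over the event window."""
    X = np.column_stack([np.ones(len(est_idx)), r_market[est_idx]])
    coef, *_ = np.linalg.lstsq(X, r_firm[est_idx], rcond=None)   # alpha, beta
    resid = r_firm[est_idx] - X @ coef
    sigma2 = resid.var(ddof=2)                       # residual variance
    ar = r_firm[event_idx] - (coef[0] + coef[1] * r_market[event_idx])
    car = ar.sum()                                   # cumulative abnormal return
    t_stat = car / np.sqrt(len(event_idx) * sigma2)  # naive t-statistic
    return ar, car, t_stat

# Toy illustration with simulated daily returns
rng = np.random.default_rng(0)
r_m = rng.normal(0.0, 0.01, 260)
r_f = 0.0002 + 1.1 * r_m + rng.normal(0.0, 0.02, 260)
ar, car, t = market_model_abnormal_returns(r_f, r_m,
                                           est_idx=np.arange(0, 250),
                                           event_idx=np.arange(250, 260))
print(car, t)
```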

683 citations


Dissertation
01 Jan 1997
TL;DR: It is shown that a Bayesian approach to learning in multi-layer perceptron neural networks achieves better performance than the commonly used early stopping procedure, even for reasonably short amounts of computation time.
Abstract: This thesis develops two Bayesian learning methods relying on Gaussian processes and a rigorous statistical approach for evaluating such methods. In these experimental designs the sources of uncertainty in the estimated generalisation performances due to both variation in training and test sets are accounted for. The framework allows for estimation of generalisation performance as well as statistical tests of significance for pairwise comparisons. Two experimental designs are recommended and supported by the DELVE software environment. Two new non-parametric Bayesian learning methods relying on Gaussian process priors over functions are developed. These priors are controlled by hyperparameters which set the characteristic length scale for each input dimension. In the simplest method, these parameters are fit from the data using optimization. In the second, fully Bayesian method, a Markov chain Monte Carlo technique is used to integrate over the hyperparameters. One advantage of these Gaussian process methods is that the priors and hyperparameters of the trained models are easy to interpret. The Gaussian process methods are benchmarked against several other methods, on regression tasks using both real data and data generated from realistic simulations. The experiments show that small datasets are unsuitable for benchmarking purposes because the uncertainties in performance measurements are large. A second set of experiments provide strong evidence that the bagging procedure is advantageous for the Multivariate Adaptive Regression Splines (MARS) method. The simulated datasets have controlled characteristics which make them useful for understanding the relationship between properties of the dataset and the performance of different methods. The dependency of the performance on available computation time is also investigated. It is shown that a Bayesian approach to learning in multi-layer perceptron neural networks achieves better performance than the commonly used early stopping procedure, even for reasonably short amounts of computation time. The Gaussian process methods are shown to consistently outperform the more conventional methods.
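
A minimal sketch of Gaussian-process regression with one length-scale hyperparameter per input dimension, the kind of prior described in the thesis. It is not the thesis's DELVE software or its training code: hyperparameters are simply fixed rather than optimized or integrated over by MCMC, and the function names are illustrative.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scales, signal_var=1.0):
    """Squared-exponential covariance with one length scale per input
    dimension (the 'relevance'-style hyperparameters mentioned above)."""
    d = (X1[:, None, :] - X2[None, :, :]) / length_scales
    return signal_var * np.exp(-0.5 * np.sum(d ** 2, axis=-1))

def gp_predict(X_train, y_train, X_test, length_scales, noise_var=0.1):
    """GP regression posterior mean and variance with fixed hyperparameters."""
    K = rbf_kernel(X_train, X_train, length_scales) + noise_var * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train, length_scales)
    K_ss = rbf_kernel(X_test, X_test, length_scales)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)
    return mean, var

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 40)
Xs = np.linspace(-3, 3, 5).reshape(-1, 1)
mu, var = gp_predict(X, y, Xs, length_scales=np.array([1.0]))
print(mu, var)
```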

467 citations


Book
01 Jun 1997
TL;DR: Basic Principles of Population Genetics * Counting Methods and the EM Algorithm * Newton's Method and Scoring * Hypothesis Testing and Categorical Data
Abstract: Basic Principles of Population Genetics * Counting Methods and the EM Algorithm * Newton's Method and Scoring * Hypothesis Testing and Categorical Data * Genetic Identity Coefficients * Applications of Identity Coefficients * Computation of Mendelian Likelihoods * The Polygenic Model * Descent Graph Models * Molecular Phylogeny * Radiation Hybrid Mapping * Models of Recombination * Sequence Analysis * Poisson Approximation * Diffusion Processes

389 citations


Journal ArticleDOI
TL;DR: The structural components method is extended to the estimation of the Receiver Operating Characteristics (ROC) curve area for clustered data, incorporating the concepts of design effect and effective sample size used by Rao and Scott (1992, Biometrics 48, 577-585) for clustered binary data.
Abstract: Current methods for estimating the accuracy of diagnostic tests require independence of the test results in the sample. However, cases in which there are multiple test results from the same patient are quite common. In such cases, estimation and inference of the accuracy of diagnostic tests must account for intracluster correlation. In the present paper, the structural components method of DeLong, DeLong, and Clarke-Pearson (1988, Biometrics 44, 837-844) is extended to the estimation of the Receiver Operating Characteristics (ROC) curve area for clustered data, incorporating the concepts of design effect and effective sample size used by Rao and Scott (1992, Biometrics 48, 577-585) for clustered binary data. Results of a Monte Carlo simulation study indicate that the size of statistical tests that assume independence is inflated in the presence of intracluster correlation. The proposed method, on the other hand, appropriately handles a wide variety of intracluster correlations, e.g., correlations between true disease statuses and between test results. In addition, the method can be applied to both continuous and ordinal test results. A strategy for estimating sample size requirements for future studies using clustered data is discussed.
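
The design-effect and effective-sample-size idea borrowed from Rao and Scott can be sketched as follows. This is only the familiar equal-cluster-size approximation deff = 1 + (m_bar - 1) * rho, not the paper's full structural-components extension for ROC areas; the function name and the example intracluster correlation are assumptions.

```python
import numpy as np

def effective_sample_size(cluster_sizes, icc):
    """Effective sample size under a simple design-effect adjustment,
    deff = 1 + (m_bar - 1) * icc (equal-size-cluster approximation)."""
    n = np.sum(cluster_sizes)
    m_bar = np.mean(cluster_sizes)
    deff = 1.0 + (m_bar - 1.0) * icc
    return n / deff

# Example: 50 patients contributing 4 test results each, ICC = 0.3
print(effective_sample_size(np.full(50, 4), icc=0.3))  # about 105 instead of 200
```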

376 citations


Journal ArticleDOI
TL;DR: In this paper, the authors suggest that confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, and that minimum biologically significant effect sizes be used for all power analyses, and if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
Abstract: Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to accept a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
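
The paper's recommendation, reporting a confidence interval and comparing it with a minimum biologically significant effect rather than computing retrospective power from the observed effect, can be sketched as follows. The function name, the simple degrees-of-freedom choice, and the simulated data are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.stats import t

def ci_vs_biological_effect(x1, x2, delta_bio, conf=0.95):
    """Confidence interval for a difference in means, reported in lieu of
    retrospective power for a non-rejected null: if the whole interval
    lies inside (-delta_bio, +delta_bio), the data support 'no biologically
    significant effect'; a wide interval means the study was uninformative."""
    n1, n2 = len(x1), len(x2)
    diff = x1.mean() - x2.mean()
    se = np.sqrt(x1.var(ddof=1) / n1 + x2.var(ddof=1) / n2)
    df = n1 + n2 - 2                       # simple df approximation
    half = t.ppf(1 - (1 - conf) / 2, df) * se
    lo, hi = diff - half, diff + half
    negligible = (lo > -delta_bio) and (hi < delta_bio)
    return (lo, hi), negligible

rng = np.random.default_rng(2)
x1, x2 = rng.normal(0.0, 1.0, 30), rng.normal(0.1, 1.0, 30)
print(ci_vs_biological_effect(x1, x2, delta_bio=1.0))
```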

363 citations


Journal ArticleDOI
TL;DR: The authors showed that for multivariate distributions exhibiting a type of positive dependence that arise in many multiple-hypothesis testing situations, the Simes method indeed controls the probability of type I error.
Abstract: The Simes method for testing intersection of more than two hypotheses is known to control the probability of type I error only when the underlying test statistics are independent. Although this method is more powerful than the classical Bonferroni method, it is not known whether it is conservative when the test statistics are dependent. This article proves that for multivariate distributions exhibiting a type of positive dependence that arise in many multiple-hypothesis testing situations, the Simes method indeed controls the probability of type I error. This extends some results established very recently in the special case of two hypotheses.
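
For reference, the Simes test of the intersection (global) null rejects at level alpha if any ordered p-value satisfies p_(i) <= i * alpha / k. A minimal sketch (function name illustrative):

```python
import numpy as np

def simes_test(p_values, alpha=0.05):
    """Simes test of the intersection null: reject if p_(i) <= i*alpha/k
    for at least one ordered p-value."""
    p = np.sort(np.asarray(p_values, dtype=float))
    k = len(p)
    thresholds = alpha * np.arange(1, k + 1) / k
    return bool(np.any(p <= thresholds))

# Smallest ordered p-value 0.012 <= 1*0.05/4 = 0.0125, so the global null is rejected
print(simes_test([0.012, 0.04, 0.20, 0.60]))  # True
```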

319 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe the construction of resampling tests for differences of means that account simultaneously for temporal and spatial correlation, using the relatively new concept of moving blocks.
Abstract: Presently employed hypothesis tests for multivariate geophysical data (e.g., climatic fields) require the assumption that either the data are serially uncorrelated, or spatially uncorrelated, or both. Good methods have been developed to deal with temporal correlation, but generalization of these methods to multivariate problems involving spatial correlation has been problematic, particularly when (as is often the case) sample sizes are small relative to the dimension of the data vectors. Spatial correlation has been handled successfully by resampling methods when the temporal correlation can be neglected, at least according to the null hypothesis. This paper describes the construction of resampling tests for differences of means that account simultaneously for temporal and spatial correlation. First, univariate tests are derived that respect temporal correlation in the data, using the relatively new concept of “moving blocks” bootstrap resampling. These tests perform accurately for small samples ...
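
The moving-blocks idea, resampling overlapping blocks so that short-range temporal correlation is preserved within each block, can be sketched for a univariate difference-of-means test. This is a toy version, not the paper's multivariate procedure: the block length, the AR(1) example series, and the centering-under-the-null step are illustrative choices.

```python
import numpy as np

def ar1_series(n, phi, rng):
    """Generate a simple AR(1) series for the illustration."""
    x = np.zeros(n)
    e = rng.normal(size=n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def moving_blocks_resample(x, block_len, rng):
    """Concatenate randomly chosen overlapping blocks of length block_len,
    preserving temporal correlation within each block."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

def block_bootstrap_mean_diff_pvalue(x, y, block_len=5, n_boot=2000, seed=0):
    """Two-sided bootstrap p-value for a difference in means of two
    autocorrelated series, resampling each centered series (the null of
    equal means) with the moving-blocks scheme."""
    rng = np.random.default_rng(seed)
    observed = x.mean() - y.mean()
    xc, yc = x - x.mean(), y - y.mean()
    null_diffs = np.array([
        moving_blocks_resample(xc, block_len, rng).mean()
        - moving_blocks_resample(yc, block_len, rng).mean()
        for _ in range(n_boot)])
    return float(np.mean(np.abs(null_diffs) >= abs(observed)))

rng = np.random.default_rng(1)
x = 0.3 + ar1_series(120, 0.6, rng)   # series with a shifted mean
y = ar1_series(120, 0.6, rng)
print(block_bootstrap_mean_diff_pvalue(x, y))
```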

Journal ArticleDOI
TL;DR: In this paper, a multiplicity of approaches and procedures for multiple testing problems with weights are discussed, for both the intersection hypothesis testing and the multiple hypotheses testing problems, and an optimal per family weighted error-rate controlling procedure is obtained.
Abstract: In this paper we offer a multiplicity of approaches and procedures for multiple testing problems with weights. Some rationale for incorporating weights in multiple hypotheses testing are discussed. Various type-I error-rates and different possible formulations are considered, for both the intersection hypothesis testing and the multiple hypotheses testing problems. An optimal per family weighted error-rate controlling procedure à la Spjotvoll (1972) is obtained. This model serves as a vehicle for demonstrating the different implications of the approaches to weighting. Alternative approaches to that of Holm (1979) for family-wise error-rate control with weights are discussed, one involving an alternative procedure for family-wise error-rate control, and the other involving the control of a weighted family-wise error-rate. Extensions and modifications of the procedures based on Simes (1986) are given. These include a test of the overall intersection hypothesis with general weights, and weighted sequentially rejective procedures for testing the individual hypotheses. The false discovery rate controlling approach and procedure of Benjamini & Hochberg (1995) are extended to allow for different weights.
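
As one concrete example of weighting, a commonly used weighted false-discovery-rate variant applies the Benjamini-Hochberg step-up rule to p-values divided by weights that average one. The sketch below is in the spirit of the weighted procedures discussed here but is not claimed to be the paper's exact extension; the function name and the example weights are assumptions.

```python
import numpy as np

def weighted_bh(p_values, weights, q=0.05):
    """One weighted FDR procedure (sketch): divide each p-value by its
    weight (weights normalized to average 1), then apply the usual
    Benjamini-Hochberg step-up rule to the weighted p-values."""
    p = np.asarray(p_values, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w * len(w) / w.sum()                 # normalize weights to average 1
    pw = p / w                               # weighted p-values
    order = np.argsort(pw)
    thresholds = q * np.arange(1, len(p) + 1) / len(p)
    below = pw[order] <= thresholds
    rejected = np.zeros(len(p), dtype=bool)
    if below.any():
        k_max = np.max(np.nonzero(below)[0])
        rejected[order[:k_max + 1]] = True
    return rejected

p = [0.001, 0.02, 0.03, 0.20, 0.45]
w = [2.0, 1.0, 1.0, 0.5, 0.5]                # larger weight = more important hypothesis
print(weighted_bh(p, w))
```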

Journal ArticleDOI
TL;DR: The authors derive approximate analytic expressions for the biases under a simple first-order autoregressive data-generating process for the short rate, and then conduct Monte Carlo experiments based on a bias-adjusted first-order autoregressive process for the short rate and on a more realistic bias-adjusted VAR-GARCH model incorporating the short rate and three term spreads.
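
The kind of small-sample bias at issue can be reproduced with a short Monte Carlo for an AR(1) process, compared against the well-known Kendall approximation -(1 + 3*phi)/n for the OLS slope with an estimated intercept. This illustrates the phenomenon only; it is not the article's own bias expressions, and the sample size, persistence, and number of replications are arbitrary.

```python
import numpy as np

def ar1_ols_bias(phi=0.9, n=100, n_sim=2000, seed=0):
    """Monte Carlo estimate of the small-sample bias of the OLS AR(1)
    coefficient (with intercept), alongside the Kendall approximation."""
    rng = np.random.default_rng(seed)
    est = np.empty(n_sim)
    for s in range(n_sim):
        e = rng.normal(size=n)
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = phi * y[t - 1] + e[t]
        ylag, ycur = y[:-1], y[1:]
        est[s] = (np.sum((ylag - ylag.mean()) * (ycur - ycur.mean()))
                  / np.sum((ylag - ylag.mean()) ** 2))
    return est.mean() - phi, -(1.0 + 3.0 * phi) / n

print(ar1_ols_bias())   # simulated bias vs. analytic approximation (about -0.037)
```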

Journal ArticleDOI
TL;DR: In this article, the rank of a matrix π − ξ is estimated based on an asymptotically normal estimate of π and some identifiable specification for ξ.

Journal ArticleDOI
TL;DR: In this paper, a small number of simple problems, such as estimating the mean of a normal distribution or the slope in a regression equation, are covered, and some key techniques are presented.
Abstract: This paper is concerned with methods of sample size determination. The approach is to cover a small number of simple problems, such as estimating the mean of a normal distribution or the slope in a regression equation, and to present some key techniques. The methods covered are in two groups: frequentist and Bayesian. Frequentist methods specify a null and alternative hypothesis for the parameter of interest and then find the sample size by controlling both size and power. These methods often need to use prior information but cannot allow for the uncertainty that is associated with it. By contrast, the Bayesian approach offers a wide variety of techniques, all of which offer the ability to deal with uncertainty associated with prior information.
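
The frequentist branch of the paper, fixing a null and an alternative and controlling both size and power, reduces in the simplest normal-mean case to a closed-form sample-size formula, sketched below (function name and numbers illustrative).

```python
import numpy as np
from scipy.stats import norm

def n_for_mean(delta, sigma, alpha=0.05, power=0.90):
    """Sample size for a one-sample two-sided z-test of a normal mean:
    choose n so the test of H0: mu = mu0 against mu = mu0 + delta has
    the requested power."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return int(np.ceil(((z_a + z_b) * sigma / delta) ** 2))

print(n_for_mean(delta=2.0, sigma=5.0))   # about 66 observations
```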

Journal ArticleDOI
TL;DR: In this article, a variety of flexible specification, fixed specification, linear, and nonlinear econometric models are used to forecast nine macroeconomic variables in a real-time scenario.

Journal ArticleDOI
TL;DR: This survey article attempts to synthesize a broad variety of work on wavelets in statistics and includes some recent developments in nonparametric curve estimation that have been omitted from review articles and books on the subject.
Abstract: The field of nonparametric function estimation has broadened its appeal in recent years with an array of new tools for statistical analysis. In particular, theoretical and applied research on the field of wavelets has had noticeable influence on statistical topics such as nonparametric regression, nonparametric density estimation, nonparametric discrimination and many other related topics. This is a survey article that attempts to synthesize a broad variety of work on wavelets in statistics and includes some recent developments in nonparametric curve estimation that have been omitted from review articles and books on the subject. After a short introduction to wavelet theory, wavelets are treated in the familiar context of estimation of «smooth» functions. Both «linear» and «nonlinear» wavelet estimation methods are discussed and cross-validation methods for choosing the smoothing parameters are addressed. Finally, some areas of related research are mentioned, such as hypothesis testing, model selection, hazard rate estimation for censored data, and nonparametric change-point problems. The closing section formulates some promising research directions relating to wavelets in statistics.

Journal ArticleDOI
TL;DR: In this paper, a hidden Markov model-based (HMM-based) utterance verification system using the framework of statistical hypothesis testing is described. When the proposed verification technique was integrated into a state-of-the-art connected digit recognition system, the string error rate for valid digit strings was found to decrease by 57% at a 5% rejection rate, and the system correctly rejected over 99.9% of nonvocabulary word strings.

Abstract: Utterance verification represents an important technology in the design of user-friendly speech recognition systems. It involves the recognition of keyword strings and the rejection of nonkeyword strings. This paper describes a hidden Markov model-based (HMM-based) utterance verification system using the framework of statistical hypothesis testing. The two major issues on how to design keyword and string scoring criteria are addressed. For keyword verification, different alternative hypotheses are proposed based on the scores of antikeyword models and a general acoustic filler model. For string verification, different measures are proposed with the objective of detecting nonvocabulary word strings and possibly erroneous strings (so-called putative errors). This paper also motivates the need for discriminative hypothesis testing in verification. One such approach based on minimum classification error training is investigated in detail. When the proposed verification technique was integrated into a state-of-the-art connected digit recognition system, the string error rate for valid digit strings was found to decrease by 57% when setting the rejection rate to 5%. Furthermore, the system was able to correctly reject over 99.9% of nonvocabulary word strings.
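
Stripped of the HMM machinery, the hypothesis-testing core is a likelihood-ratio decision: accept a hypothesized keyword only if its score sufficiently exceeds that of competing anti-keyword and filler models. The sketch below shows that generic decision rule only; the way the alternative scores are combined and the threshold value are assumptions, not the paper's scoring criteria.

```python
import numpy as np

def verify_keyword(loglik_keyword, loglik_antikeyword, loglik_filler,
                   threshold=0.0):
    """Generic likelihood-ratio-style verification decision: accept the
    hypothesized keyword only if its log-likelihood sufficiently exceeds
    an average of the competing alternative-hypothesis scores."""
    loglik_alternative = np.logaddexp(loglik_antikeyword, loglik_filler) - np.log(2)
    llr = loglik_keyword - loglik_alternative      # log-likelihood ratio score
    return llr, llr > threshold

print(verify_keyword(-120.0, -135.0, -128.0, threshold=2.0))   # accepted
print(verify_keyword(-120.0, -121.0, -119.5, threshold=2.0))   # rejected
```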

Journal ArticleDOI
TL;DR: An approach to significance testing by the direct interpretation of likelihood is defined, developed and distinguished from the traditional forms of tail-area testing and Bayesian testing.
Abstract: An approach to significance testing by the direct interpretation of likelihood is defined, developed and distinguished from the traditional forms of tail-area testing and Bayesian testing. The emphasis is on conceptual issues. Some theoretical aspects of the new approach are sketched in the two cases of simple vs. simple hypotheses and simple vs. composite hypotheses.

Journal ArticleDOI
TL;DR: In this paper, the use of bootstrap methods to compute interval estimates and perform hypothesis tests for decomposable measures of economic inequality is considered, using the Gini coefficient and Theil's entropy measures of inequality.
Abstract: SUMMARY In this paper we consider the use of bootstrap methods to compute interval estimates and perform hypothesis tests for decomposable measures of economic inequality. Two applications of this approach, using the Gini coefficient and Theil's entropy measures of inequality, are provided. Our first application employs pre- and post-tax aggregate state income data, constructed from the Panel Study of Income Dynamics. We find that although casual observation of the inequality measures suggests that the post-tax distribution of income is less equal among states than pre-tax income, none of these observed differences are statistically significant at the 10% level. Our second application uses the National Longitudinal Survey of Youth data to study youth inequality. We find that youth inequality decreases as the cohort ages, but between age-group inequality has increased in the latter half of the 1980s. The results suggest that (1) statistical inference is essential even when large samples are available, and (2) the bootstrap procedure appears to perform well in this setting. © 1997 by John Wiley & Sons, Ltd. J. appl. econom. 12: 133-150, 1997.
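
A minimal version of the approach, a percentile bootstrap interval for the Gini coefficient, can be sketched as follows. The Gini formula used, the lognormal toy data, and the confidence level are illustrative; the paper itself works with decomposable measures and the PSID/NLSY data.

```python
import numpy as np

def gini(income):
    """Gini coefficient via the sorted-index formula
    G = 2*sum(i*x_(i)) / (n*sum(x)) - (n+1)/n."""
    x = np.sort(np.asarray(income, dtype=float))
    n = len(x)
    return (2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum())) - (n + 1.0) / n

def bootstrap_ci_gini(income, n_boot=2000, conf=0.90, seed=0):
    """Percentile bootstrap confidence interval for the Gini coefficient."""
    rng = np.random.default_rng(seed)
    income = np.asarray(income, dtype=float)
    stats = np.array([gini(rng.choice(income, size=len(income), replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.percentile(stats, [100 * (1 - conf) / 2, 100 * (1 + conf) / 2])
    return gini(income), (lo, hi)

incomes = np.random.default_rng(3).lognormal(mean=10, sigma=0.6, size=500)
print(bootstrap_ci_gini(incomes))
```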

Journal ArticleDOI
TL;DR: In this article, the Bonferroni multiple testing method is extended to multiple testing and the posterior probability of the null hypothesis is adjusted by multiplying by k, the number of tests considered.
Abstract: SUMMARY Bayes/frequentist correspondences between the p-value and the posterior probability of the null hypothesis have been studied in univariate hypothesis testing situations. This paper extends these comparisons to multiple testing and in particular to the Bonferroni multiple testing method, in which p-values are adjusted by multiplying by k, the number of tests considered. In the Bayesian setting, prior assessments may need to be adjusted to account for multiple hypotheses, resulting in corresponding adjustments to the posterior probabilities. Conditions are given for which the adjusted posterior probabilities roughly correspond to Bonferroni adjusted p-values.
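
The frequentist side of the correspondence is just the Bonferroni adjustment: multiply each p-value by k and cap at 1. A one-function sketch (the Bayesian prior adjustment discussed in the paper is not reproduced here):

```python
import numpy as np

def bonferroni_adjust(p_values):
    """Bonferroni-adjusted p-values: multiply each p-value by the number
    of tests k and cap at 1."""
    p = np.asarray(p_values, dtype=float)
    return np.minimum(p * len(p), 1.0)

print(bonferroni_adjust([0.003, 0.02, 0.40]))   # [0.009, 0.06, 1.0]
```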

Journal ArticleDOI
TL;DR: In this paper, the mean and exponential statistics of Andrews and Ploberger (1994, Econometrica 62, 1383–1414) and the supremum statistic of Andrews (1993, Econometrica 61, 821–856) were extended to allow trending and unit root regressors.
Abstract: In this paper, test statistics for detecting a break at an unknown date in the trend function of a dynamic univariate time series are proposed. The tests are based on the mean and exponential statistics of Andrews and Ploberger (1994, Econometrica 62, 1383–1414) and the supremum statistic of Andrews (1993, Econometrica 61, 821–856). Their results are extended to allow trending and unit root regressors. Asymptotic results are derived for both I(0) and I(1) errors. When the errors are highly persistent and it is not known which asymptotic theory (I(0) or I(1)) provides a better approximation, a conservative approach based on nearly integrated asymptotics is provided. Power of the mean statistic is shown to be nonmonotonic with respect to the break magnitude and is dominated by the exponential and supremum statistics. Versions of the tests applicable to first differences of the data are also proposed. The tests are applied to some macroeconomic time series, and the null hypothesis of a stable trend function is rejected in many cases.
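
The structure of such tests, a Chow-type statistic computed at every admissible break date and then summarized by the sup, mean, and exp functionals, can be sketched for the simplest case of a one-time shift in the mean. This toy version ignores trending and unit root regressors and serial correlation, which are the paper's actual contributions, and it does not supply the nonstandard critical values (those come from the Andrews and Andrews-Ploberger tables or from simulation).

```python
import numpy as np
from scipy.special import logsumexp

def break_test_stats(y, trim=0.15):
    """Chow-type F statistics for a one-time shift in the mean of y at
    every admissible break date (trimmed sample), summarized by the
    sup, mean, and exp functionals."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    ssr0 = np.sum((y - y.mean()) ** 2)               # no-break (restricted) SSR
    lo, hi = int(np.floor(trim * n)), int(np.ceil((1 - trim) * n))
    f = []
    for k in range(lo, hi):                          # candidate break dates
        ssr1 = (np.sum((y[:k] - y[:k].mean()) ** 2)
                + np.sum((y[k:] - y[k:].mean()) ** 2))
        f.append((ssr0 - ssr1) / (ssr1 / (n - 2)))   # one restriction
    f = np.array(f)
    return {"sup": f.max(),
            "mean": f.mean(),
            "exp": logsumexp(0.5 * f) - np.log(len(f))}

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])
print(break_test_stats(y))
```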

Journal ArticleDOI
TL;DR: Building on earlier work showing that n independent locational observations contain more spatial information than n autocorrelated observations, the authors examine a statistical test of the null hypothesis that successive observations are independent; the test is robust when used with data from utilization distributions that are not normal but is sensitive to nonstationary distributions induced by shifts in centers of activity or variance-covariance structure.
Abstract: In a previous study, we showed that n independent locational observations contain more spatial information than n autocorrelated observations. We also developed a statistical test of the null hypothesis that successive observations are independent. Here, we expand our discussion of testing for independence by clarifying assumptions associated with the tests. Specifically, the tests are robust when used with data collected from utilization distributions that are not normal, but they are sensitive to nonstationary distributions induced by shifts in centers of activity or variance-covariance structure. We also used simulations to examine how negative bias in kernel and polygon estimators of home-range size is influenced by level of autocorrelation, sampling rate, sampling design, and study duration. Relative bias increased with increasing levels of autocorrelation and reduced sample sizes. Kernel (95%) estimates were less biased than minimum convex polygon estimates. The effect of autocorrelation is greatest when low levels of bias (> -5%) are desired. For percent relative bias in the range of -20% to -5%, though, collection of moderately autocorrelated data bears little cost in terms of additional loss of spatial information relative to an equal number of independent observations. Tests of independence, when used with stationary data, provide a useful measure of the rate of home-range use and a means of checking assumptions associated with analyses of habitat use. However, our results indicate that exclusive use of independent observations is unnecessary when estimating home-range size with kernel or polygon methods.

Journal ArticleDOI
TL;DR: In this paper, rank statistics are derived for testing the nonparametric hypotheses of no main effects, no interaction, and no factor effects in unbalanced crossed classifications, and a modification of the test statistics and approximations to their finite-sample distributions are also given.
Abstract: Factorial designs are studied with independent observations, fixed number of levels, and possibly unequal number of observations per factor level combination. In this context, the nonparametric null hypotheses introduced by Akritas and Arnold are considered. New rank statistics are derived for testing the nonparametric hypotheses of no main effects, no interaction, and no factor effects in unbalanced crossed classifications. The formulation of all results includes tied observations. Extensions of these procedures to higher-way layouts are given, and the efficacies of the test statistics against nonparametric alternatives are derived. A modification of the test statistics and approximations to their finite-sample distributions are also given. The small-sample performance of the procedures for two factors is examined in a simulation study. As an illustration, a real dataset with ordinal data is analyzed.

01 Jan 1997
TL;DR: In this article, the authors provide a general theory about the Poisson-Binomial distribution concerning its computation and applications, and as by-products, they propose new weighted sampling schemes for finite population, a new method for hypothesis testing in logistic regression, and a new algorithm for finding the maximum conditional likelihood estimate (MCLE) in case-control studies.
Abstract: The distribution of Z1 +···+ZN is called Poisson-Binomial if the Zi are independent Bernoulli random variables with not-all-equal probabilities of success. It is noted that such a distribution and its computation play an important role in a number of seemingly unrelated research areas such as survey sampling, case-control studies, and survival analysis. In this article, we provide a general theory about the Poisson-Binomial distribution concerning its computation and applications, and as by-products, we propose new weighted sampling schemes for finite population, a new method for hypothesis testing in logistic regression, and a new algorithm for finding the maximum conditional likelihood estimate (MCLE) in case-control studies. Two of our weighted sampling schemes are direct generalizations of the "sequential" and "reservoir" methods of Fan, Muller and Rezucha (1962) for simple random sampling, which are of interest to computer scientists. Our new algorithm for finding the MCLE in case-control studies is an iterative weighted least squares method, which naturally bridges prospective and retrospective GLMs.
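
The computational object at the center of the paper, the Poisson-Binomial distribution, can be evaluated exactly by convolving one Bernoulli variable at a time. A minimal O(N^2) sketch; the paper's own algorithms and sampling schemes are more sophisticated.

```python
import numpy as np

def poisson_binomial_pmf(probs):
    """Exact PMF of Z1+...+ZN for independent Bernoulli(p_i) variables,
    computed by sequentially convolving one Bernoulli at a time."""
    pmf = np.array([1.0])                      # distribution of an empty sum
    for p in probs:
        new = np.zeros(len(pmf) + 1)
        new[:-1] += pmf * (1.0 - p)            # contribution from Z_i = 0
        new[1:] += pmf * p                     # contribution from Z_i = 1
        pmf = new
    return pmf

pmf = poisson_binomial_pmf([0.1, 0.4, 0.7, 0.9])
print(pmf, pmf.sum())                          # probabilities of totals 0..4; sums to 1
```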

Journal ArticleDOI
TL;DR: Using logical constraints among hypotheses and correlations among test statistics can greatly improve the power of step-down tests, and an algorithm for uncovering logically constrained subsets in a given dataset is described.
Abstract: Use of logical constraints among hypotheses and correlations among test statistics can greatly improve the power of step-down tests. An algorithm for uncovering these logically constrained subsets in a given dataset is described. The multiple testing results are summarized using adjusted p values, which incorporate the relevant dependence structures and logical constraints. These adjusted p values are computed consistently and efficiently using a generalized least squares hybrid of simple and control-variate Monte Carlo methods, and the results are compared to alternative stepwise testing procedures.

Journal ArticleDOI
TL;DR: In this article, the authors performed a meta-analysis to determine whether the first-mover advantage hypothesis is sensitive to the methods used, and they found that tests using market share as their performance measure were sharply and significantly more likely to find a first mover advantage than tests using other measures such as profitability or survival.
Abstract: A long-standing hypothesis is that firms that enter a market early ("first movers") tend to have higher performance than their followers (a "first-mover advantage"). Recently, researchers have begun to argue that the statistical tests that support this relationship are limited in their applicability. That is, it is suggested that because of the methods used, these tests show the relationship only for certain subsets of firms, markets, and types of performance. We performed a meta-analysis to determine whether the findings are in fact sensitive to the methods used. We discovered that tests using market share as their performance measure were sharply and significantly more likely to find a first-mover advantage than tests using other measures such as profitability or survival. Also significantly more likely to find an advantage were tests that sample from individually selected industries and those that include no measures of the entrants' competitive strength. Conversely, we found little evidence that "survivor bias" (the exclusion of nonsurviving entrants from the sample) affects a test's findings. The data further suggest that tests that use none of the questioned research practices will find a first-mover advantage no more often than can be accounted for by random statistical error alone.

Book
30 Nov 1997
TL;DR: This book covers fundamentals of probability and statistical analysis, including data description and treatment, probability distributions, estimation of parameters, hypothesis testing, regression analysis, Monte Carlo simulation, reliability and risk analysis, and Bayesian methods.
Abstract: Introduction: Introduction. - Types of Uncertainty. - Taylor Series Expansion. - Applications. - Problems. - Data Description and Treatment: Introduction.- Classification of Data. - Graphical Description of Data. - Histograms and Frequency Diagrams. - Descriptive Measures. - Applications. - Problems. - Fundamentals Of Probability: Introduction. - Sample Spaces, Sets, and Events. - Mathematics of Probability. - Random Variables and Their Probability Distributions. - Moment.- Common Discrete Probability Distributions. - Common Continuous Probability Distributions. - Applications. - Problems. - Multiple Random Variables: Introduction. - Joint Random Variables and Their Probability Distributions. - Functions of Random Variables. - Applications. - Problems. - Fundamentals of Statistical Analysis: Introduction. - Estimation of Parameters. - Sampling Distributions. - Hypothesis Testing: Procedure. - Hypothesis Tests of Means. - Hypothesis Tests of Variances. - Confidence Intervals. - Sample-Size Determination. - Selection of Model Probability Distributions. - Applications. Problems. - Curve Fitting and Regression Analysis: Introduction. - Correlation Analysis. - Introduction to Regression. - Principle of Least Squares. - Reliability of the Regression Equation. - Reliability of Point Estimates of the Regression Coefficients. - Confidence Intervals of the Regression Equation. - Correlation Versus Regression. - Applications of Bivariate Regression Analysis. - Multiple Regression Analysis. - Regression Analysis of Nonlinear Models. - Applications. Problems. - Simulation: Introduction. - Monte Carlo Simulation. - Random Numbers. - Generation of Random Variables. - Generation of Selected Discrete Random Variables. - Generation of Selected Continuous Random Variables. - Applications. - Problems. - Reliability and Risk Analysis: Introduction. - Time to Failure. - Reliability of Components. - Reliability of Systems. - Risk-Based Decision Analysis. - Applications. - Problems. - Bayesian Methods: Introduction. - Bayesian Probabilities. - Bayesian Estimation of Parameters. - Bayesian Statistics. - Applications. - Problems. - Appendix A: Probability and Statistics Tables. - Appendix B: Values of the Gamma Function. - Subject Index.

Journal ArticleDOI
TL;DR: In this article, the authors argue that although there may be good reasons to give up the null hypothesis statistical test, these particular points made by Cohen are not among those reasons, and demonstrate the elegance and usefulness of the NHST.
Abstract: Jacob Cohen (1994) raised a number of questions about the logic and information value of the null hypothesis statistical test (NHST). Specifically, he suggested that: (a) The NHST does not tell us what we want to know; (b) the null hypothesis is always false; and (c) the NHST lacks logical integrity. It is the author's view that although there may be good reasons to give up the NHST, these particular points made by Cohen are not among those reasons. When addressing these points, the author also attempts to demonstrate the elegance and usefulness of the NHST.

Journal ArticleDOI
TL;DR: This article presents a conceptual introduction to two methods of testing group differences on a latent variable: group code analysis and structured means analysis. Neither method, however, is suited to the problem of group classification.
Abstract: This article serves as a conceptual introduction to two methods of testing group differences on a latent variable: group code analysis and structured means analysis.