
Showing papers in "Methodology: European Journal of Research Methods for The Behavioral and Social Sciences in 2005"


Journal ArticleDOI
TL;DR: In this paper, a simulation study is used to determine the influence of different sample sizes at the group level on the accuracy of the estimates (regression coefficients and variances) and their standard errors.
Abstract: An important problem in multilevel modeling is what constitutes a sufficient sample size for accurate estimation. In multilevel analysis, the major restriction is often the higher-level sample size. In this paper, a simulation study is used to determine the influence of different sample sizes at the group level on the accuracy of the estimates (regression coefficients and variances) and their standard errors. In addition, the influence of other factors, such as the lowest-level sample size and different variance distributions between the levels (different intraclass correlations), is examined. The results show that only a small sample size at level two (meaning a sample of 50 or less) leads to biased estimates of the second-level standard errors. In all of the other simulated conditions the estimates of the regression coefficients, the variance components, and the standard errors are unbiased and accurate.
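
To make the design concrete, here is a minimal sketch of this kind of simulation (in Python with statsmodels; the model, parameter values, and number of replications are illustrative assumptions, not the paper's actual settings). If the second-level standard errors are unbiased, their average should track the empirical spread of the group-level coefficient estimates across replications:

```python
# Sketch: does the model-based SE of a group-level coefficient match the
# empirical SD of its estimates as the number of groups J grows?
# All settings (J, n, ICC, effect size) are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def simulate_fit(J, n, icc=0.2, gamma=0.5):
    """One replication of a random-intercept model with a group-level predictor z."""
    tau2, sigma2 = icc, 1.0 - icc                # between- and within-group variance
    z = rng.normal(size=J)                       # group-level covariate
    u = rng.normal(scale=np.sqrt(tau2), size=J)  # random intercepts
    g = np.repeat(np.arange(J), n)
    y = gamma * z[g] + u[g] + rng.normal(scale=np.sqrt(sigma2), size=J * n)
    fit = smf.mixedlm("y ~ z", pd.DataFrame({"y": y, "z": z[g], "g": g}),
                      groups="g").fit(reml=True)
    return fit.params["z"], fit.bse["z"]

for J in (10, 30, 50, 100):                      # second-level sample sizes
    est, se = zip(*(simulate_fit(J, n=20) for _ in range(100)))
    print(f"J={J:4d}  empirical SD={np.std(est):.3f}  mean model SE={np.mean(se):.3f}")
```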

2,931 citations


Journal ArticleDOI
TL;DR: In this article, the authors compare the performance of a nonparametric alternative to one of the standard parametric test statistics when the assumptions of multivariate normality and homogeneous covariance matrices are not met, and show that when covariance matrices are heterogeneous the nonparametric approach has a lower type I error rate and higher power than the most robust parametric statistic.
Abstract: Multivariate analysis of variance (MANOVA) is a useful tool for social scientists because it allows for the comparison of response-variable means across multiple groups. MANOVA requires that the observations are independent, the response variables are multivariate normally distributed, and the covariance matrix of the response variables is homogeneous across groups. When the assumptions of normality and homogeneous covariance matrices are not met, past research has shown that the type I error rate of the standard MANOVA test statistics can be inflated while their power can be attenuated. The current study compares the performance of a nonparametric alternative to one of the standard parametric test statistics when these two assumptions are not met. Results show that when the assumption of homogeneous covariance matrices is not met, the nonparametric approach has a lower type I error rate and higher power than the most robust parametric statistic. When the assumption of normality is untenable, th...
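
As a hedged illustration of this kind of comparison, the sketch below generates two groups with equal means but heterogeneous covariance matrices, then runs a parametric MANOVA on the raw data and on rank-transformed data. The rank transform is one common nonparametric device and stands in for the alternative studied here, which the truncated abstract does not name:

```python
# Parametric MANOVA vs. a rank-transform alternative under heterogeneous
# covariance matrices. Data and group sizes are invented for illustration.
import numpy as np
import pandas as pd
from scipy.stats import rankdata
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
cov1 = np.array([[1.0, 0.3], [0.3, 1.0]])
cov2 = np.array([[4.0, 1.5], [1.5, 4.0]])      # group b is far more variable
g1 = rng.multivariate_normal([0, 0], cov1, size=40)
g2 = rng.multivariate_normal([0, 0], cov2, size=40)

df = pd.DataFrame(np.vstack([g1, g2]), columns=["y1", "y2"])
df["group"] = ["a"] * 40 + ["b"] * 40

# Standard parametric test statistics (Wilks' lambda, Pillai's trace, ...).
print(MANOVA.from_formula("y1 + y2 ~ group", data=df).mv_test())

# Nonparametric device: rank each response variable over the whole sample,
# then apply the same test statistics to the ranks.
df_rank = df.copy()
df_rank[["y1", "y2"]] = df[["y1", "y2"]].apply(rankdata)
print(MANOVA.from_formula("y1 + y2 ~ group", data=df_rank).mv_test())
```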

190 citations


Journal ArticleDOI
TL;DR: This paper reviews both the main criticisms of the method and the alternatives that have been put forward to complement or replace it, and concludes that rigorous research activity requires use of NHST in the appropriate context, the complementary use of other methods which provide information about as...
Abstract: Null hypothesis significance testing (NHST) is one of the most widely used methods for testing hypotheses in psychological research. However, it has remained shrouded in controversy throughout the almost seventy years of its existence. The present article reviews both the main criticisms of the method as well as the alternatives which have been put forward to complement or replace it. It focuses mainly on those alternatives whose use is recommended by the Task Force on Statistical Inference (TFSI) of the APA (Wilkinson and TFSI, 1999) in the interests of improving the working methods of researchers with respect to statistical analysis and data interpretation. In addition, the arguments used to reject each of the criticisms levelled against NHST are reviewed and the main problems with each of the alternatives are pointed out. It is concluded that rigorous research activity requires use of NHST in the appropriate context, the complementary use of other methods which provide information about as...
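
The TFSI recommendations mentioned above center on reporting effect sizes and confidence intervals alongside the NHST p-value rather than replacing it. A minimal sketch of that complementary reporting, with invented data:

```python
# NHST p-value plus the complements the TFSI recommends: a standardized
# effect size and a confidence interval. Data are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, size=50)
b = rng.normal(0.4, 1.0, size=50)

t, p = stats.ttest_ind(a, b)                    # the NHST part
diff = b.mean() - a.mean()
sp = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
             / (len(a) + len(b) - 2))           # pooled standard deviation
d = diff / sp                                   # Cohen's d
se = sp * np.sqrt(1 / len(a) + 1 / len(b))
half = stats.t.ppf(0.975, len(a) + len(b) - 2) * se
print(f"p = {p:.3f}, d = {d:.2f}, 95% CI for the mean difference = "
      f"[{diff - half:.2f}, {diff + half:.2f}]")
```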

70 citations


Journal ArticleDOI
Rolf Steyer
TL;DR: In this article, the author extends Rubin's concepts of individual and average causal effects by replacing the deterministic potential-outcome variables with stochastic expected-outcome variables, and introduces specific designs and models which allow identification of the variance of the individual causal effects.
Abstract: Although both individual and average causal effects are defined in Rubin's approach to causality, in this tradition almost all papers center around learning about the average causal effects. Almost no efforts deal with developing designs and models to learn about individual effects. This paper takes a first step in this direction. In the first and general part, Rubin's concepts of individual and average causal effects are extended by replacing Rubin's deterministic potential-outcome variables with stochastic expected-outcome variables. Based on this extension, in the second and main part, specific designs, assumptions, and models are introduced which allow identification of (1) the variance of the individual causal effects, (2) the regression of the individual causal effects on the true scores of the pretests, (3) the regression of the individual causal effects on other explanatory variables, and (4) the individual causal effects themselves. Although random assignment of the observational unit to o...
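
In generic potential-outcome notation, the quantities at stake can be sketched as follows (the symbols, including τ_pre for the pretest true score, are illustrative and not necessarily the paper's own):

```latex
% Rubin's individual and average causal effects, and the stochastic
% extension described in the abstract (notation is illustrative).
\[
\delta_u = Y_1(u) - Y_0(u)
\qquad \text{(individual causal effect of unit } u\text{)}
\]
\[
\mathrm{ACE} = E(\delta) = E(Y_1) - E(Y_0)
\qquad \text{(average causal effect)}
\]
\[
\text{Extension: } \delta(u) = E(Y_1 \mid U = u) - E(Y_0 \mid U = u),
\]
\[
\text{with identification targets } \operatorname{Var}[\delta(U)],\;
E[\delta(U) \mid \tau_{\mathrm{pre}}],\; \text{and } \delta(u) \text{ itself.}
\]
```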

63 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that the Bartlett-score estimates are most appropriate when factor interpretation is based on the factor pattern, which is usually the case in confirmatory factor analysis.
Abstract: Because of factor score indeterminacy, there can be substantial shifts in the theoretical meaning of factors and their corresponding score estimates. Therefore, the original factor pattern should be compared with the regression-component loadings (Schönemann & Steiger, 1976) corresponding to the factor-score estimates in order to detect possible shifts in the theoretical meaning. Especially with large loading matrices the similarity of the original factor pattern and the regression components of the score estimates may be ascertained by means of congruency coefficients. It is shown that these congruencies contain information that is not already given by measures of factor-score indeterminacy. Two examples illustrate the use of regression-component analysis for different types of factor-score estimates. The analyses reveal that the Bartlett-score estimates are most appropriate when factor interpretation is based on the factor pattern, which is usually the case in confirmatory factor analysis.
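
A numeric sketch of the proposed check, assuming a small made-up loading matrix: compute the Bartlett score weights, derive the regression-component loadings (the covariances of the observed variables with the standardized score estimates), and compare them column by column to the original pattern with Tucker's congruence coefficient:

```python
# Congruence between an (invented) factor pattern and the regression
# components of the corresponding Bartlett factor-score estimates.
import numpy as np

L = np.array([[0.8, 0.0], [0.7, 0.1], [0.6, 0.0],   # 6 variables, 2 factors
              [0.0, 0.8], [0.1, 0.7], [0.0, 0.6]])
Psi = np.diag(1 - (L ** 2).sum(axis=1))             # unique variances
Sigma = L @ L.T + Psi                               # implied correlation matrix

# Bartlett weights: B = Psi^{-1} L (L' Psi^{-1} L)^{-1}, scores = B' x.
Pinv = np.linalg.inv(Psi)
B = Pinv @ L @ np.linalg.inv(L.T @ Pinv @ L)

# Regression-component loadings: cov(x, score estimates), with the score
# estimates standardized to unit variance.
S = Sigma @ B / np.sqrt(np.diag(B.T @ Sigma @ B))

# Tucker's congruence coefficient per factor.
phi = (L * S).sum(axis=0) / (np.linalg.norm(L, axis=0) * np.linalg.norm(S, axis=0))
print(phi.round(3))
```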

14 citations


Journal ArticleDOI
TL;DR: In this article, the authors show how to estimate the joint correspondence analysis (JCA) model by maximum likelihood, defining JCA as a model for the full K-way distribution that generalizes the correspondence analysis model for three-way tables proposed by Choulakian (1988a, 1988b).
Abstract: Parameter estimation in joint correspondence analysis (JCA) is typically performed by weighted least squares using the Burt matrix as the data matrix. In this paper, we show how to estimate the JCA model by means of maximum likelihood. For that purpose, JCA is defined as a model for the full K-way distribution by generalizing the correspondence analysis model for three-way tables proposed by Choulakian (1988a, 1988b). The advantage of placing JCA in a more formal statistical framework is that standard chi-squared tests can be applied to assess the goodness-of-fit of unrestricted and restricted models.
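
For readers unfamiliar with the objects involved, the sketch below (with invented categorical data) builds the Burt matrix that the weighted-least-squares approach takes as its data matrix, and the full K-way contingency table that the maximum-likelihood formulation models directly:

```python
# Burt matrix B = Z'Z from the stacked indicator codings Z of K categorical
# variables, plus the full K-way table. Data are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "sex":  ["m", "f", "f", "m", "f", "m"],
    "vote": ["a", "a", "b", "b", "a", "b"],
    "edu":  ["lo", "hi", "hi", "lo", "lo", "hi"],
})

ind = pd.get_dummies(df)                 # indicator matrix, one block per variable
Z = ind.to_numpy(dtype=float)
burt = pd.DataFrame(Z.T @ Z, index=ind.columns, columns=ind.columns)
print(burt)                              # JCA-by-WLS works from this matrix

# The ML formulation instead models the cell counts of the full K-way table,
# so standard likelihood-ratio chi-squared tests of fit become available.
print(df.value_counts().sort_index())
```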

10 citations


Journal ArticleDOI
TL;DR: Statistical time-to-event analysis has proved to be of great importance in the social sciences in the last few years, especially in applications related to labor-market analysis, employment and/or unemployment issues, duration of strikes, and survival of new firms, as mentioned in this paper.
Abstract: Statistical time-to-event analysis has proved to be of great importance in the social sciences in the last few years, especially in applications related to labor-market analysis, employment and/or unemployment issues, duration of strikes, and survival of new firms, and in financial applications related to the time a company spends in a given status, for example, bankruptcy. We review some of the techniques that have proved adequate for analyzing this type of data and the conditions required for their proper use. In addition, we extend these techniques to analyze specific and more complex situations by using a more general and flexible model. All of these techniques and their extensions are illustrated with an example that studies the duration of firms under bankruptcy in the United States.
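
As a minimal illustration of the class of techniques reviewed, the sketch below fits a Kaplan-Meier survival curve and a Cox proportional-hazards regression to invented firm-duration data using the lifelines library; the column names, covariate, and censoring pattern are assumptions for the example:

```python
# Time-to-event sketch: how long firms spend in bankruptcy, with right-
# censored observations. All data are invented for illustration.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months_in_bankruptcy": [3, 14, 7, 22, 9, 30, 5, 18],
    "resolved":             [1, 1, 0, 1, 1, 0, 1, 1],   # 0 = still open (censored)
    "log_assets":           [2.1, 4.0, 3.2, 5.1, 2.8, 5.5, 1.9, 4.4],
})

kmf = KaplanMeierFitter()
kmf.fit(df["months_in_bankruptcy"], event_observed=df["resolved"])
print(kmf.survival_function_)    # estimated P(still in bankruptcy at month t)

cph = CoxPHFitter()
cph.fit(df, duration_col="months_in_bankruptcy", event_col="resolved")
cph.print_summary()              # hazard ratio for the log_assets covariate
```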

3 citations