
Showing papers in "British Journal of Mathematical and Statistical Psychology in 1994"


Journal ArticleDOI
TL;DR: In this paper, two estimators in the factor analysis of categorical items are studied, the weighted least squares function implemented in the tandem PRELIS-LISREL 7 and a generalized least squares function implemented in LISCOMP.
Abstract: Two estimators in the factor analysis of categorical items are studied, the weighted least squares function implemented in the tandem PRELIS-LISREL 7 and a generalized least squares function implemented in LISCOMP. Of main interest is the performance of these estimators in relatively small samples (200 to 400) and the comparison of their performance with the normal theory maximum likelihood estimator given an increasing number of response categories. The evaluation of the performance of these estimators concerns the variability of the parameter estimates, the bias of the parameter estimates, the distribution of the parameter estimates and the χ2 goodness-of-fit statistics. The model used in the simulation is an 8-indicator single common factor model. The effect of model size (12- and 16-indicator models) on the categorical item estimator of LISREL 7 is investigated briefly. The results indicate that in the ideal circumstances of the simulation study, 200 is too small a sample size to justify the use of large sample statistics associated with these estimators.

284 citations
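
As a gloss on the simulation design, here is a minimal sketch of how such categorical item data can be generated: normal response variables following a single common factor model are cut at thresholds into ordered categories. The loading and threshold values are illustrative assumptions; the estimators themselves (PRELIS-LISREL 7, LISCOMP) are not reproduced.

```python
# Sketch of the data-generating step in a categorical-item factor simulation.
# The loading value and thresholds are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
n, k, loading = 200, 8, 0.7          # sample size, indicators, common loading

factor = rng.standard_normal(n)                       # latent common factor
unique = rng.standard_normal((n, k)) * np.sqrt(1 - loading**2)
latent = loading * factor[:, None] + unique           # continuous responses

thresholds = np.array([-1.0, 0.0, 1.0])               # cuts giving 4 categories
items = np.searchsorted(thresholds, latent)           # ordinal item scores 0..3
print(items[:5])
```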


Journal ArticleDOI
TL;DR: It is shown that the bootstrap correction of additive bias on the ADF test statistic yields the desired tail behaviour as the sample size reaches 500 for a 15-variable-3-factor confirmatory factor-analytic model, even if the distribution of the observed variables is not multivariate normal and the latent factors are dependent.
Abstract: The asymptotically distribution-free (ADF) test statistic for covariance structure analysis (CSA) has been reported to perform very poorly in simulation studies, i.e. it leads to inaccurate decisions regarding the adequacy of models of psychological processes. It is shown in the present study that the poor performance of the ADF test statistic is due to inadequate estimation of the weight matrix (W = Γ⁻¹), which is a critical quantity in the ADF theory. Bootstrap procedures based on Hall's bias reduction perspective are proposed to correct the ADF test statistic. It is shown that the bootstrap correction of additive bias on the ADF test statistic yields the desired tail behaviour as the sample size reaches 500 for a 15-variable-3-factor confirmatory factor-analytic model, even if the distribution of the observed variables is not multivariate normal and the latent factors are dependent. These results help to revive the ADF theory in CSA.

104 citations
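
The correction itself is easy to sketch in generic form. Below is a minimal illustration of Hall-style additive bias reduction by the bootstrap, applied to a simple statistic; the paper's application to the ADF chi-square requires resampling under the fitted model, which is not reproduced here.

```python
# Generic Hall-style additive bias correction via the bootstrap (a sketch,
# not the paper's ADF-specific procedure). `statistic` is any function of a
# data matrix; the example data and statistic are illustrative assumptions.
import numpy as np

def bias_corrected(statistic, data, n_boot=500, rng=None):
    rng = rng or np.random.default_rng(0)
    t = statistic(data)
    n = data.shape[0]
    boot = [statistic(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    bias = np.mean(boot) - t          # bootstrap estimate of additive bias
    return t - bias                   # i.e. 2*t - mean(boot)

data = np.random.default_rng(1).exponential(size=(60, 1))
print(bias_corrected(lambda x: x.var(), data))   # reduces small-sample bias
```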


Journal ArticleDOI
TL;DR: In this article, a new method (SCA-S) is developed for simultaneous component analysis in such a way that for each set essentially the same component structure is found.
Abstract: The present paper discusses several methods for (simultaneous) component analysis of scores of two or more groups of individuals on the same variables. Some existing methods are discussed, and a new method is developed for simultaneous component analysis in such a way that for each set essentially the same component structure is found (SCA-S). This method is compared to alternative methods for analysing such data which employ the same component weights matrix (SCA-W) or the same pattern matrix (SCA-P) across data sets. Among these methods, SCA-W always explains the highest amount of variance, SCA-S the lowest, and SCA-P takes the position in between. These explained variances can be compared to the amount of variance explained by separate PCAs. Implications of such fit differences are discussed. In addition, it is shown how, for cases where SCA-S does not fit well, one can use SCA-W (and SCA-P) to find out if and how correlational structures differ. Finally, some attention is paid to facilitating the interpretation of an SCA-S solution. Like the other SCA methods, SCA-S has rotational freedom. This rotational freedom is exploited in a specially designed simple structure rotation technique for SCA-S. This technique is illustrated on an empirical data set.

92 citations
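
As a rough gloss on the fit comparisons discussed above, the following sketch contrasts the variance explained by r components fitted separately per group with that of components fitted to the concatenated groups. This is a crude stand-in for the common-structure idea, not an implementation of any of the SCA algorithms; the group sizes and r are arbitrary assumptions.

```python
# Explained variance of separate per-group PCAs vs. one PCA of the stacked
# groups (a sketch of the fit comparison, not the SCA-S/SCA-W/SCA-P methods).
import numpy as np

rng = np.random.default_rng(2)
groups = [rng.standard_normal((50, 6)) for _ in range(3)]
r = 2

def explained(x, r):
    s = np.linalg.svd(x - x.mean(0), compute_uv=False)
    return (s[:r]**2).sum() / (s**2).sum()

separate = [explained(g, r) for g in groups]   # each group fitted on its own
pooled = explained(np.vstack(groups), r)       # one common structure for all
print(separate, pooled)                        # pooled fit can only be lower
```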


Journal ArticleDOI
TL;DR: Examination of visual information processing under a stressor of recurring loud sound among groups divided according to psychometrically identified stress susceptibility found disruption by stress among susceptible subjects of performance-enhancing strategies of deploying processing resources across the different task components.
Abstract: This study examined visual information processing under a stressor of recurring loud sound among groups divided according to psychometrically identified stress susceptibility. Formal models of task performance were employed to address several issues concerning stress effects on cognitive functioning. Examined were effects on parallel versus serial processing structure, task-wise processing capacity, strategies of allocating processing resources to task components, and curtailment of processing of relevant task elements. Contrary to prediction, stressor presence generated slightly more rather than less evidence of a parallel versus serial processing structure. There was some suggestion of central-task capacity depletion among more susceptible subjects, in line with certain theoretical positions. Evidence of curtailed exhaustive processing of relevant stimulus items was negative. Most notable was the disruption by stress among susceptible subjects of performance-enhancing strategies of deploying processing resources across the different task components (elements of the visual display and within-trial stages of processing). Such effects have received relatively little attention in this research domain; their investigation is shown to be made tractable, however, through the application of selected formal models of information processing.

72 citations


Journal ArticleDOI
TL;DR: In this article, a least squares strategy is developed for representing a symmetric proximity matrix containing similarity or dissimilarity values between each pair of objects from some given set, as an approximate sum of a small number of symmetric matrices having the same size as the original but which satisfy certain simple order constraints on their entries.
Abstract: A least-squares strategy is developed for representing a symmetric proximity matrix containing similarity or dissimilarity values between each pair of objects from some given set, as an approximate sum of a small number of symmetric matrices having the same size as the original but which satisfy certain simple order constraints on their entries. The primary class of constraints considered are of the Robinson (or anti-Robinson) type, where the entries in such a matrix, subject to a suitable row/column ordering, never increase (or decrease) when moving away from a main diagonal entry within any row or column. Matrices satisfying either the Robinson or anti-Robinson condition can be viewed as defining certain restricted collections of possibly overlapping subsets along with an associated measure of ‘compactness’ or ‘salience’ for each; these subsets and their compactness or salience indices form the basis for helping explain the patterning of entries in the initial proximity matrix as now reflected by the matrix sum. A number of empirical examples based on well-known published data sets are used as illustrations of how such reconstructions might be carried out and interpreted. Finally, several other types of matrix order constraints are mentioned briefly, along with a few corresponding numerical examples, to show how alternative structures also can be considered using the same type of computational strategy as in the (anti-)Robinson case.

44 citations
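
The (anti-)Robinson condition is simple to state operationally; the sketch below checks it for a dissimilarity matrix under a given row/column ordering.

```python
# Check the anti-Robinson condition: within every row of a suitably ordered
# dissimilarity matrix, entries never decrease moving away from the diagonal
# (columns follow by symmetry). A small sketch of the constraint, not of the
# least-squares fitting strategy itself.
import numpy as np

def is_anti_robinson(d):
    n = d.shape[0]
    for i in range(n):
        left = d[i, :i + 1][::-1]     # entries moving left from the diagonal
        right = d[i, i:]              # entries moving right from the diagonal
        if np.any(np.diff(left) < 0) or np.any(np.diff(right) < 0):
            return False
    return True

d = np.abs(np.subtract.outer(np.arange(5), np.arange(5)))  # perfectly ordered
print(is_anti_robinson(d))   # True
```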


Journal ArticleDOI
TL;DR: The results indicate that when sample sizes are unequal and dispersion matrices are unequal, using the improved general approximation or revised improved general approximation test of the interaction allows better control of the Type I error rate than does the ε-adjusted test.
Abstract: For split plot designs, exact univariate F tests of the within-subjects main effect and the between × within interaction are based on the assumption of multisample sphericity. Type I error rates are reported for three tests designed for use when multisample sphericity is violated: the general approximation test, the improved general approximation test and a revised improved general approximation test. The results indicate that when sample sizes are unequal and dispersion matrices are unequal, using the improved general approximation or revised improved general approximation test of the interaction allows better control of the Type I error rate than does the ε-adjusted test.

35 citations
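
For context, the ε-adjusted test mentioned above rescales the F test's degrees of freedom by an estimate of Box's epsilon. A minimal sketch of that estimate from a sample covariance matrix of the k repeated measures follows; the general approximation tests involve further corrections not reproduced here.

```python
# Box's epsilon from a k x k covariance matrix of the repeated measures
# (the quantity behind the epsilon-adjusted F test; a sketch only).
import numpy as np

def box_epsilon(S):
    k = S.shape[0]
    P = np.eye(k) - np.ones((k, k)) / k     # double-centering projector
    A = P @ S @ P
    return np.trace(A) ** 2 / ((k - 1) * np.sum(A * A))

S = np.eye(4) + 0.3                          # compound symmetry: epsilon = 1
print(box_epsilon(S))
```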


Journal ArticleDOI
TL;DR: In this paper, a comparison of results from two methods for estimating and testing a model for the factor analysis of dichotomous variables is presented, and substantial differences between the full-information and limited-information methods become apparent in results from the test of fit.
Abstract: This paper presents a comparison of results from two methods for estimating and testing a model for the factor analysis of dichotomous variables. For k manifest dichotomous variables, the data can be cross-classified to form a vector of 2^k frequencies, and nonlinear methods that use the full information in these 2^k frequencies are available for factor analysis. In addition, another method that uses only the limited information in the first- and second-order marginal frequencies is available for the same model. As k becomes larger, substantial differences between the full-information and limited-information methods become apparent in results from the test of fit. For large k, Type I and Type II error rates may be higher in the full-information approach, because as the vector of 2^k frequencies becomes sparse, the chi-square approximation for the distribution of the goodness-of-fit test statistic becomes poorer.

26 citations
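
The two kinds of information contrasted above are easy to compute directly; here is a minimal sketch from an n × k matrix of 0/1 item scores (the data are simulated for illustration).

```python
# Full-information vs. limited-information summaries of binary item data:
# the vector of all 2^k cross-classified frequencies, and the first- and
# second-order marginal proportions.
import numpy as np

rng = np.random.default_rng(3)
x = (rng.random((200, 4)) < 0.5).astype(int)     # n=200, k=4 binary items
n, k = x.shape

cells = x @ (2 ** np.arange(k))                  # index of each response pattern
full = np.bincount(cells, minlength=2 ** k)      # all 2^k frequencies
first = x.mean(0)                                # first-order margins
second = (x.T @ x) / n                           # second-order margins
print(full.sum() == n, first, second[0, 1])
```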


Journal ArticleDOI
TL;DR: In this article, a simulation study was conducted to compare different estimation methods and goodness-of-fit statistics for multinomial models and the results indicated that the optimal choices are not always the maximum likelihood estimation method and the log-likelihood ratio statistic.
Abstract: The application of multinomial models to describe psychological data usually involves small sample sizes. In these circumstances, the asymptotic properties of the parameter estimation method and goodness-of-fit statistics designed for use with multinomial models may not hold up. This paper illustrates the design of a simulation study that will allow researchers and practitioners to determine which of the available estimation methods and goodness-of-fit statistics for multinomial models has better finite-sample properties for the model and sample size of concern. All of the estimation methods and goodness-of-fit statistics considered are members of the family of power divergence measures defined by Cressie & Read (1984), which include maximum likelihood and minimum chi-square among other estimation methods. Criteria for comparing the methods and statistics include the accuracy of the estimates and the closeness of the asymptotic significance levels of the statistics to their exact finite-sample levels. A number of simulations are carried out in order to investigate these properties for different models and sample sizes. In addition to illustrating the procedure for carrying out the comparison, the results found along the way indicate that the optimal choices are not always the maximum likelihood estimation method and the log-likelihood ratio statistic.

24 citations
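
The Cressie & Read (1984) family is available directly in SciPy, which makes this kind of comparison easy to reproduce in outline: lambda_ = 1 gives the Pearson chi-square and lambda_ = 0 the log-likelihood ratio statistic, so the whole family can be evaluated on the same table. The frequencies below are illustrative.

```python
# Power divergence statistics from the Cressie-Read family for one table.
from scipy.stats import power_divergence

observed = [28, 14, 10, 8]
expected = [15, 15, 15, 15]
for lam in (1, 0, 2/3):                  # Pearson, G^2, Cressie-Read's 2/3
    stat, p = power_divergence(observed, f_exp=expected, lambda_=lam)
    print(lam, round(stat, 3), round(p, 4))
```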


Journal ArticleDOI
TL;DR: In this article, an alternative to multiple regression that is appropriate when the dependent variable is ordinal is suggested, by treating the problem as one in discriminant analysis by discriminating the pairs of subjects whose ordinal relations are in one direction from those with relations in the other.
Abstract: An alternative to multiple regression that is appropriate when the dependent variable is ordinal is suggested. The goal of the system is to predict correctly as many as possible of the binary ordinal relations on the dependent variable. This can be done by treating the problem as one in discriminant analysis by discriminating the pairs of subjects whose ordinal relations are in one direction from those with relations in the other. The bases of prediction can be raw score differences on predictors, their rank differences, or their directions of difference. For each, it is possible to find a system of weights that approximately maximizes discrimination. These turn out to depend on the variables' covariances, on their rank correlations, and on their tau correlations, respectively. It is also possible to estimate the odds that any given relation is in a particular direction. A solution for the weights that exactly maximizes probability of correct ordinal prediction is available in the case of predicting from directions of difference. An example is given.

24 citations
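
A minimal sketch of the pairwise idea, for the variant that predicts from directions of difference: the signs of the predictor differences, weighted here by each predictor's tau correlation with the criterion (an illustrative choice, not the paper's exact maximizing weights), vote on the direction of each criterion difference.

```python
# Predicting binary ordinal relations on y from directions of difference on
# the predictors, with tau-correlation weights (illustrative assumption).
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(4)
X = rng.standard_normal((40, 3))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.standard_normal(40)

w = np.array([kendalltau(X[:, j], y)[0] for j in range(X.shape[1])])

i, j = np.triu_indices(len(y), k=1)
keep = y[i] != y[j]                                  # ordered pairs only
score = np.sign(X[i] - X[j])[keep] @ w               # weighted vote per pair
correct = np.sign(score) == np.sign(y[i] - y[j])[keep]
print(correct.mean())            # share of ordinal relations predicted
```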


Journal ArticleDOI
TL;DR: In the context of a random effects model, the effect size, as measured by the intraclass correlation, might be small due to outliers or heavy-tailed distributions rather than a lack of differences among the groups being compared as discussed by the authors.
Abstract: A well-known result is that slight departures from normality can have a large effect on the usual correlation coefficient rendering the magnitude of the correlation difficult to interpret and potentially misleading. In the context of a random effects model, which is the focus of attention in this paper, this means that effect size, as measured by the intraclass correlation, might be small due to outliers or heavy-tailed distributions rather than a lack of differences among the groups being compared. Similarly, a large intraclass correlation might be due to trivial shifts away from normality which would become small if an adjustment for non-normality were made. Moreover, this problem has to do with the effects of non-normality on population parameters, not just statistics, so problems can arise even with large sample sizes. This follows almost immediately from results in Tukey (1960), and it is briefly illustrated here. One approach to this problem is to use a Winsorized analogue of the intraclass correlation. This paper suggests three ways the Winsorized intraclass correlation might be estimated and compares them via simulations. A bivariate generalization of the random effects model is also considered, and two methods of estimating the group-level correlation are described and compared. Alternatives to Winsorization are also discussed.

12 citations
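
A minimal sketch of one such estimator, assuming a 20% Winsorizing level and the usual balanced-design ANOVA estimator of the intraclass correlation applied to the Winsorized scores; the paper's three estimators and its bivariate extension are not reproduced.

```python
# Winsorize within groups, then compute the ANOVA-based intraclass
# correlation from the Winsorized scores (a sketch under stated assumptions).
import numpy as np
from scipy.stats.mstats import winsorize

def winsorized_icc(groups, limits=0.2):
    gw = [np.asarray(winsorize(g, limits=(limits, limits))) for g in groups]
    n = [len(g) for g in gw]
    grand = np.concatenate(gw).mean()
    msb = sum(m * (g.mean() - grand) ** 2 for m, g in zip(n, gw)) / (len(gw) - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in gw) / (sum(n) - len(gw))
    n0 = np.mean(n)                      # balanced-case shortcut (assumption)
    return (msb - msw) / (msb + (n0 - 1) * msw)

rng = np.random.default_rng(5)
groups = [rng.standard_normal(20) + mu for mu in (0.0, 0.3, 0.6)]
print(winsorized_icc(groups))
```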


Journal ArticleDOI
TL;DR: In this paper, the authors show how bootstrapping gives a better idea of the sampling distribution of the estimators, and can also allow an assessment of the reliability of the scoring of individuals on the latent scale.
Abstract: Estimated asymptotic variances for the estimates of the parameters in a logit-probit model for binary response data are unreliable for moderate sized samples. We show how bootstrapping gives a better idea of the sampling distribution of the estimators, and can also allow an assessment of the reliability of the scoring of individuals on the latent scale.
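
The resampling scheme itself is generic and easy to sketch. Below, a stand-in estimator replaces the logit-probit fit, which is not reproduced; the bootstrap replications give a percentile picture of the sampling distribution instead of relying on asymptotic standard errors.

```python
# Nonparametric bootstrap of an estimator over resampled response patterns
# (a generic sketch; `estimate` is a placeholder for the logit-probit fit).
import numpy as np

rng = np.random.default_rng(6)
data = (rng.random((150, 5)) < 0.4).astype(int)   # binary response matrix

def estimate(d):
    return d.mean(0)       # stand-in for the model's parameter estimates

boot = np.array([estimate(data[rng.integers(0, len(data), len(data))])
                 for _ in range(1000)])
print(np.percentile(boot[:, 0], [2.5, 97.5]))     # percentile interval, item 1
```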

Journal ArticleDOI
TL;DR: In this article, a re-parameterization of the restricted factor analysis model for multitrait multimethod (MTMM) data has been proposed, which accommodates multiplicative as well as additive relationships in MTMM matrices.
Abstract: Some limitations in existing covariance structure models for multitrait multimethod (MTMM) data are discussed as an introduction to a re-parameterization of the restricted factor analysis model. Unlike existing specifications of this model, the reparameterized model is consistent with Campbell & Fiske's underlying rationale for the MTMM design. It accommodates multiplicative as well as additive relationships in MTMM matrices and provides information concerning the extent to which any matrix conforms to Campbell & Fiske's (1959) four requirements for convergent and discriminant validity. It can also indicate the degree of trait variance in measures that is independent of method effects. Emphasis is placed on the specification of factor correlation matrices which accords with the hypothesized relationships among both traits and methods in an MTMM study. These suggestions are demonstrated by an application of the reparameterized model to an exemplar MTMM matrix which restricted factor analysis has previously found difficult in providing an interpretable solution.
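
As an illustration of what a multiplicative trait-method structure means, the sketch below builds a model-implied MTMM correlation matrix as a Kronecker product of a trait and a method correlation matrix, as in direct product formulations of MTMM models; the correlation values are arbitrary assumptions.

```python
# Multiplicative MTMM structure as a Kronecker product (illustrative values;
# not the paper's re-parameterized restricted factor analysis model).
import numpy as np

r_traits = np.array([[1.0, 0.5, 0.3],
                     [0.5, 1.0, 0.4],
                     [0.3, 0.4, 1.0]])
r_methods = np.array([[1.0, 0.6],
                      [0.6, 1.0]])
mtmm = np.kron(r_methods, r_traits)   # 6 x 6 method-by-trait correlations
print(mtmm.round(2))
```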

Journal ArticleDOI
TL;DR: In this article, the authors examined three simple transformations which increase the power of F tests of main effects and interaction in 2 × 2 factorial designs under violation of normality, and found that detection and downweighting of outliers before performing the F test was more effective than rank methods for several distributions.
Abstract: This study examined three simple transformations which increase the power of F tests of main effects and interaction in 2 × 2 factorial designs under violation of normality. The study obtained Monte Carlo results from various heavy-tailed densities, including mixed-normal, exponential, Cauchy and Laplace densities, which are associated with grossly distorted probabilities of Type I and Type II errors. Transformation of scores to ranks made the F test for interaction robust and comparable in power to the Mann-Whitney-Wilcoxon test for the same distributions. Transformation to ‘modular ranks’, having one-fourth the number of values of conventional ranks, was equally effective. Detection and downweighting of outliers before performing the F test was more effective than rank methods for several distributions. Implications of these findings for the role of scales of measurement and nonparametric methods in psychological research are discussed.
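
The two rank transformations are straightforward to apply before an ordinary F test. In the sketch below the ‘modular rank’ collapsing rule, dividing the ranks into one-fourth as many values, is an assumption based on the description above; a two-group comparison stands in for the 2 × 2 design.

```python
# Raw scores, conventional ranks, and 'modular ranks' (collapsed to one-fourth
# as many values; the exact rule is assumed) fed to the same F test.
import numpy as np
from scipy.stats import rankdata, f_oneway

rng = np.random.default_rng(7)
a, b = rng.standard_cauchy(30), rng.standard_cauchy(30) + 1

scores = np.concatenate([a, b])
ranks = rankdata(scores)
modular = np.ceil(ranks / 4)          # 15 distinct values instead of 60

for t in (scores, ranks, modular):
    print(f_oneway(t[:30], t[30:]).pvalue)
```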

Journal ArticleDOI
TL;DR: In this article, a method of computing reasonably accurate confidence intervals for the slope of two resistant regression methods: the biweight midregression and Winsorized regression was proposed, and a relatively unobvious method was found to perform reasonably well when using bi-weight mid-regression, and fairly well for the other regression method except for one extreme situation described in the paper.
Abstract: Robust and resistant regression has taken on new importance in recent years with the realization that psychometric measures often have heavy-tailed distributions with extreme outliers. While many resistant regression methods are available, little is known about how a researcher might compute a confidence interval for the corresponding parameters. The primary goal in this paper is finding a method of computing reasonably accurate confidence intervals for the slope of two resistant regression methods: the biweight midregression and Winsorized regression. Several ‘obvious’ bootstrap methods for computing confidence intervals were found to be highly unsatisfactory. A relatively unobvious method was found to perform reasonably well when using biweight midregression, and fairly well for the other regression method except for one extreme situation described in the paper. In terms of power, no situation was found where ordinary least squares is to be preferred over Winsorized regression, while Winsorized regression can have a substantial advantage over ordinary least squares. There are situations where the biweight midregression has substantially more power than ordinary least squares, but when the predictor has a highly skewed distribution, there are situations where the reverse is true. The relative merits of the two resistant regression methods are discussed.
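
A minimal sketch of Winsorized regression under one common definition, least squares applied to marginally Winsorized scores with an assumed 20% level; biweight midregression and the paper's recommended bootstrap interval are not reproduced here.

```python
# Winsorized regression slope vs. the ordinary least squares slope under
# heavy-tailed errors (a sketch under the stated assumptions).
import numpy as np
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(8)
x = rng.standard_normal(50)
y = 0.5 * x + rng.standard_cauchy(50) * 0.5      # heavy-tailed errors

xw = np.asarray(winsorize(x, limits=(0.2, 0.2)))
yw = np.asarray(winsorize(y, limits=(0.2, 0.2)))
print(np.polyfit(xw, yw, 1)[0], np.polyfit(x, y, 1)[0])  # Winsorized vs OLS
```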

Journal ArticleDOI
TL;DR: In this paper, two scaling procedures are presented for the measurement of asymmetry observed in comparative judgement in terms of three ordered categories; their features and applicability are discussed in comparison with related work.
Abstract: This paper gives two scaling procedures for measurement of asymmetry observed in comparative judgement in terms of three ordered categories. With a revised law of comparative judgement, a least squares solution and a maximum likelihood solution are suggested for the parameter estimation with some statistical tests. Numerical examples are illustrated, and features and applicability of the present work are discussed in comparison with related work.

Journal ArticleDOI
TL;DR: This paper proposes a trace strength model that allows computation of relatively independent measures of recency and familiarity and closes with some speculations on the possible roles of context and interference in this type of system.
Abstract: One problem with trace strength models of recognition is that they typically confound the recency of a stimulus with its frequency of occurrence (familiarity). This paper proposes a trace strength model that allows computation of relatively independent measures of recency and familiarity. It closes with some speculations on the possible roles of context and interference in this type of system.
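
A toy sketch, not the paper's model, of the confound being addressed: with a single summed, decaying trace strength, one recent presentation and several older ones can yield similar strengths, so recency and frequency cannot be read off separately.

```python
# Summed exponentially decaying trace strength (a toy illustration of the
# recency/frequency confound; the decay rate is an arbitrary assumption).
import numpy as np

def strength(lags, decay=0.8):
    return np.sum(decay ** np.asarray(lags))   # summed, decayed traces

print(strength([1]), strength([5, 6, 8]))      # similar strengths, different histories
```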

Journal ArticleDOI
TL;DR: In this paper, the problem of performing all pairwise comparisons of column means for an additive non-orthogonal two-by-four factorial ANOVA model where cell variances were heterogeneous was considered.
Abstract: This study considered the problem of performing all pairwise comparisons of column means for an additive non-orthogonal two-by-four factorial ANOVA model where cell variances were heterogeneous. Extensions of the Games & Howell (1976) procedure, the Dunnett (1980) T3 and C procedures, the Holland & Copenhaver (1987) technique, the Hayter (1986) procedure, and the James (1951) second-order test were considered. Using computer-simulated data, Type I error rates and statistical power for these multiple comparison procedures were estimated. Examined in this study were 132 different combinations of sample size, variance patterns, group mean patterns, and design types. The family-wise Type I error rate for each of these procedures was generally maintained under the nominal .05 level. In terms of statistical power, the Games-Howell procedure generally provided the greatest any-pair power, while the extension of the Hayter technique provided the greatest average power per contrast and was most efficient in identifying all significant pairwise differences (all-pairs power).
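
For one pair of groups, the Games-Howell comparison combines a Welch-type standard error and degrees of freedom with the studentized range distribution; a minimal sketch using SciPy follows, where k is the total number of groups in the family.

```python
# Games-Howell comparison for one pair of groups under unequal variances
# (a sketch of the standard form of the test, not the paper's simulations).
import numpy as np
from scipy.stats import studentized_range

def games_howell(a, b, k):
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    q = abs(a.mean() - b.mean()) / np.sqrt((va + vb) / 2)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return studentized_range.sf(q, k, df)      # family-wise adjusted p value

rng = np.random.default_rng(9)
g1, g2 = rng.normal(0, 1, 15), rng.normal(1, 3, 25)
print(games_howell(g1, g2, k=4))
```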