# Showing papers in "Psychometrika in 1970"

••

Bell Labs

TL;DR: In this paper, an individual differences model for multidimensional scaling is outlined in which individuals are assumed differentially to weight the several dimensions of a common "psychological space" and a corresponding method of analyzing similarities data is proposed, involving a generalization of Eckart-Young analysis to decomposition of three-way (or higher-way) tables.

Abstract: An individual differences model for multidimensional scaling is outlined in which individuals are assumed differentially to weight the several dimensions of a common “psychological space”. A corresponding method of analyzing similarities data is proposed, involving a generalization of “Eckart-Young analysis” to decomposition of three-way (or higher-way) tables. In the present case this decomposition is applied to a derived three-way table of scalar products between stimuli for individuals. This analysis yields a stimulus by dimensions coordinate matrix and a subjects by dimensions matrix of weights. This method is illustrated with data on auditory stimuli and on perception of nations.
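The decomposition the abstract describes models each subject's scalar-product matrix as B_k = X diag(w_k) X′, with X the shared stimulus space and w_k that subject's dimension weights. As a minimal illustration (synthetic data, our own variable names, and only the weight-estimation half of the alternating procedure), the weights are recoverable by ordinary least squares once the stimulus space is known:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_dim, n_subj = 8, 2, 5

# A common "psychological space": stimulus coordinates (n_stim x n_dim).
X = rng.normal(size=(n_stim, n_dim))
# Each subject weights the shared dimensions differently (nonnegative).
W = rng.uniform(0.2, 2.0, size=(n_subj, n_dim))

# Derived scalar-product matrices, one per subject: B_k = X diag(w_k) X'.
B = np.stack([X @ np.diag(w) @ X.T for w in W])

# Recover each subject's weights by linear least squares, treating the
# outer products of the coordinate columns as predictors of vec(B_k).
Z = np.column_stack([np.outer(X[:, r], X[:, r]).ravel() for r in range(n_dim)])
W_hat = np.array([np.linalg.lstsq(Z, Bk.ravel(), rcond=None)[0] for Bk in B])

print(np.allclose(W_hat, W))  # exact recovery with noise-free data
```

The full method alternates between updating the stimulus coordinates and the subject weights; this sketch fixes X at its true value to keep the linear-algebra core visible.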

4,520 citations

••

3,969 citations

••

TL;DR: In this article, a general method for estimating the unknown coefficients in a set of linear structural equations is described, which allows for both errors in equations (residuals, disturbances) and errors in variables (errors of measurement, observational errors).

Abstract: A general method for estimating the unknown coefficients in a set of linear structural equations is described. In its most general form the method allows for both errors in equations (residuals, disturbances) and errors in variables (errors of measurement, observational errors) and yields estimates of the residual variance-covariance matrix and the measurement error variances as well as estimates of the unknown coefficients in the structural equations, provided all these parameters are identified. Two special cases of this general method are discussed separately. One is when there are errors in equations but no errors in variables. The other is when there are errors in variables but no errors in equations. The methods are applied and illustrated using artificial, economic and psychological data.

1,119 citations

••

TL;DR: In this article, a method of estimating the parameters of the normal ogive model for dichotomously scored item-responses by maximum likelihood is demonstrated, which requires numerical integration in order to evaluate the likelihood equations, but is shown to be straightforward in other respects.

Abstract: A method of estimating the parameters of the normal ogive model for dichotomously scored item-responses by maximum likelihood is demonstrated. Although the procedure requires numerical integration in order to evaluate the likelihood equations, a computer implemented Newton-Raphson solution is shown to be straightforward in other respects. Empirical tests of the procedure show that the resulting estimates are very similar to those based on a conventional analysis of item “difficulties” and first factor loadings obtained from the matrix of tetrachoric correlation coefficients. Problems of testing the fit of the model, and of obtaining invariant parameters are discussed.

619 citations

••

TL;DR: In this paper, a least squares method is presented for fitting a given matrix A to another given matrix B under choice of an unknown rotation, an unknown translation, and an unknown central dilation.

Abstract: A least squares method is presented for fitting a given matrixA to another given matrixB under choice of an unknown rotation, an unknown translation, and an unknown central dilation. The procedure may be useful to investigators who wish to compare results obtained with nonmetric scaling techniques across samples or who wish to compare such results with those obtained by conventional factor analytic techniques on the same sample.
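The fitting problem the abstract describes (orthogonal rotation, translation, and central dilation) has a classical closed-form solution via the singular value decomposition. The sketch below is our own reconstruction along those standard lines, not the paper's derivation; the function name is ours:

```python
import numpy as np

def procrustes_fit(A, B):
    """Least squares fit of c * A @ T + t to B, where T is orthogonal
    (rotation/reflection), c a central dilation, t a translation.
    A sketch of the classical SVD-based solution."""
    A_c = A - A.mean(axis=0)          # centering removes the translation
    B_c = B - B.mean(axis=0)
    U, s, Vt = np.linalg.svd(A_c.T @ B_c)
    T = U @ Vt                        # optimal orthogonal transformation
    c = s.sum() / (A_c ** 2).sum()    # optimal central dilation
    t = B.mean(axis=0) - c * A.mean(axis=0) @ T
    return T, c, t

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 3))
# Build B from A with a known rotation, dilation, and translation.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
B = 1.7 * A @ Q + np.array([2.0, -1.0, 0.5])

T, c, t = procrustes_fit(A, B)
print(np.allclose(c * A @ T + t, B))  # exact fit for error-free data
```

For fallible data the same formulas give the least squares fit, matching the use case the abstract mentions (comparing scaling solutions across samples).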

405 citations

••

TL;DR: Measures of test parsimony and factor parsimony are defined. Minimizing their weighted sum produces a general rotation criterion for either oblique or orthogonal rotation. The quartimax, varimax and equamax criteria are special cases of the expression. Two new criteria are developed, as discussed by the authors.

Abstract: Measures of test parsimony and factor parsimony are defined. Minimizing their weighted sum produces a general rotation criterion for either oblique or orthogonal rotation. The quartimax, varimax and equamax criteria are special cases of the expression. Two new criteria are developed. One of these, the parsimax criterion, apparently gives excellent results. It is argued that one of the most important factors bearing on the choice of a rotation criterion for a particular problem is the amount of information available on the number of factors that should be rotated.

171 citations

••

TL;DR: If the ratio of the degrees of freedom of the data to that of the coordinates is sufficiently large then metric information is recovered even when random error is present; and when the number of points being scaled increases the stress of the solution increases even though the degree of metric determinacy increases.

Abstract: The degree of metric determinacy afforded by nonmetric multidimensional scaling was investigated as a function of the number of points being scaled, the true dimensionality of the data being scaled, and the amount of error contained in the data being scaled. It was found 1) that if the ratio of the degrees of freedom of the data to that of the coordinates is sufficiently large then metric information is recovered even when random error is present; and 2) when the number of points being scaled increases the stress of the solution increases even though the degree of metric determinacy increases.
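The degrees-of-freedom ratio at issue can be illustrated with a simple accounting: an n-point configuration supplies n(n−1)/2 distinct dissimilarities, while an r-dimensional solution consumes on the order of nr coordinates. The exact bookkeeping below is our simplification, not the paper's:

```python
def df_ratio(n_points: int, n_dims: int) -> float:
    """Ratio of data degrees of freedom (one per distinct pairwise
    dissimilarity) to coordinate degrees of freedom. A rough
    illustration of the idea, not the paper's exact accounting."""
    data_df = n_points * (n_points - 1) // 2
    coord_df = n_points * n_dims
    return data_df / coord_df

# The ratio grows linearly with the number of points at fixed
# dimensionality, which is why metric recovery improves.
print(df_ratio(8, 2), df_ratio(30, 2))   # 1.75 7.25
```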

157 citations

••

TL;DR: In this article, various statistical models for simplex structures are formulated in terms of the well-known Wiener and Markov stochastic processes, and a distinction is made between a perfect simplex and a quasi simplex.

Abstract: Various statistical models for simplex structures are formulated in terms of the well-known Wiener and Markov stochastic processes. A distinction is made between a perfect simplex and a quasi simplex. For each model the problems of identification and estimation of the parameters and that of testing the goodness of fit of the model are considered. All models may be estimated by a general method for covariance structures developed by Joreskog (1970), but in some cases simpler methods may be used, in which case these are presented.

149 citations

••

TL;DR: In this article, exchangeable prior knowledge of the parameters is used to obtain estimates alternative to the usual ones; when the assumption of equality of variances within groups is omitted, new estimates of the individual variances are also derived.

Abstract: This paper is concerned with estimation problems where there are several parameters all of the same type–for example, a set of means. It often happens that the knowledge of the parameters is exchangeable in the sense of de Finetti. This prior knowledge can be used to obtain estimates alternative to the usual ones. The particular problem studied here is the familiar analysis between and within groups. New estimates of the group means are obtained, and when the assumption of equality of variances within groups is omitted, new estimates of the individual variances are also derived.
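A simple empirical-Bayes stand-in for the kind of estimate the abstract describes: exchangeable group means are shrunk toward the grand mean in proportion to how noisy the separate estimates are. The shrinkage formula, data, and variable names here are illustrative assumptions, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(6)
k, n = 8, 10
true_means = rng.normal(0.0, 1.0, size=k)       # exchangeable group means
data = true_means[:, None] + rng.normal(0.0, 2.0, size=(k, n))

ybar = data.mean(axis=1)                        # usual (separate) estimates
s2_within = data.var(axis=1, ddof=1).mean() / n   # variance of a group mean
s2_between = max(ybar.var(ddof=1) - s2_within, 1e-12)

# Exchangeability-motivated estimate: pull each group mean toward the
# grand mean; the pull is stronger when the separate estimates are noisy.
shrink = s2_between / (s2_between + s2_within)
y_shrunk = ybar.mean() + shrink * (ybar - ybar.mean())

print(round(shrink, 3))   # shrinkage factor, between 0 and 1
```

Every shrunken estimate lies between the usual estimate and the grand mean, which is the qualitative behavior of the alternative estimates the abstract refers to.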

119 citations

••

TL;DR: In this article, the problem of locating two sets of points in a joint space, given the Euclidean distances between elements from distinct sets, is solved algebraically. For error-free data the solution is exact; for fallible data it has least squares properties.

Abstract: The problem of locating two sets of points in a joint space, given the Euclidean distances between elements from distinct sets, is solved algebraically. For error-free data the solution is exact; for fallible data it has least squares properties.

118 citations

••

TL;DR: In this paper, the latent or true nature of subjects is identified with a limited number of response patterns (the Guttman scale patterns), and the probability of an observed response pattern is written as the sum of products of the probability of the true type multiplied by the chance of sufficient response error to cause the observed pattern to appear.

Abstract: By proposing that the latent or true nature of subjects is identified with a limited number of response patterns (the Guttman scale patterns), the probability of an observed response pattern can be written as the sum of products of the probability of the true type multiplied by the chance of sufficient response error to cause the observed pattern to appear. This model contains the proportions of the true types and some misclassification probabilities as parameters. Maximum likelihood methods are used to make estimates and test the fit for some examples.

••

TL;DR: In this paper, the authors extended the Gourevitch and Galanter's large two sample test to a K-sample detection test and used the post hoc confidence interval procedure described in this paper to locate possible statistically significant sources of variance and differences.

Abstract: The basic models of signal detection theory involve the parametric measure d′, generally interpreted as a detectability index. Given two observers, one might wish to know whether their detectability indices are equal or unequal. Gourevitch and Galanter (1967) proposed a large sample statistical test that could be used to test the hypothesis of equal d′ values. In this paper, their large two sample test is extended to a K-sample detection test. If the null hypothesis d′1 = d′2 = ... = d′K is rejected, one can employ the post hoc confidence interval procedure described in this paper to locate possible statistically significant sources of variance and differences. In addition, it is shown how one can use the Gourevitch and Galanter statistics to test d′ = 0 for a single individual.
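A sketch of the computation, reading the K-sample test as a variance-weighted heterogeneity chi-square on the individual d′ estimates. The variance formula follows Gourevitch and Galanter (1967); the data and the exact test layout are our assumptions:

```python
from statistics import NormalDist

N = NormalDist()

def d_prime(hits, n_signal, fas, n_noise):
    """d' = z(hit rate) - z(false-alarm rate), with the sampling
    variance given by Gourevitch and Galanter (1967)."""
    h, f = hits / n_signal, fas / n_noise
    zh, zf = N.inv_cdf(h), N.inv_cdf(f)
    var = (h * (1 - h) / (n_signal * N.pdf(zh) ** 2)
           + f * (1 - f) / (n_noise * N.pdf(zf) ** 2))
    return zh - zf, var

# Three observers: (hits, signal trials, false alarms, noise trials).
data = [(80, 100, 20, 100), (70, 100, 30, 100), (85, 100, 10, 100)]
ds, vs = zip(*(d_prime(*obs) for obs in data))

# Heterogeneity chi-square on K-1 df: weight each d' by 1/variance.
w = [1 / v for v in vs]
d_bar = sum(wi * di for wi, di in zip(w, ds)) / sum(w)
chi2 = sum(wi * (di - d_bar) ** 2 for wi, di in zip(w, ds))
print(round(chi2, 2), "on", len(ds) - 1, "df")
```

A large value of the statistic relative to the chi-square distribution on K−1 degrees of freedom leads to rejection of equal detectability, after which the paper's post hoc intervals localize the differences.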

••

TL;DR: Employing simulated data, several methods for estimating correlation and variance-covariance matrices are studied for observations missing at random from data matrices.

Abstract: Employing simulated data, several methods for estimating correlation and variance-covariance matrices are studied for observations missing at random from data matrices. The effects of sample size, number of variables, percent of missing data, and average intercorrelation of variables are examined for several proposed methods.
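Of the estimators such simulation studies typically compare (listwise deletion, pairwise deletion, mean substitution), pairwise deletion is easy to sketch: each correlation uses only the rows where both variables are observed. A minimal version with synthetic missing-at-random data; the function name is ours:

```python
import numpy as np

def pairwise_corr(X):
    """Correlation matrix under pairwise deletion: each r_ij is computed
    from the rows where both variable i and variable j are observed."""
    p = X.shape[1]
    R = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            ok = ~np.isnan(X[:, i]) & ~np.isnan(X[:, j])
            R[i, j] = R[j, i] = np.corrcoef(X[ok, i], X[ok, j])[0, 1]
    return R

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
X[rng.random(X.shape) < 0.1] = np.nan      # ~10% missing at random
R = pairwise_corr(X)
print(np.allclose(R, R.T), np.allclose(np.diag(R), 1.0))
```

A known drawback, relevant to such comparisons: because each entry is based on a different subsample, the resulting matrix is not guaranteed to be positive definite.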

••

TL;DR: In this paper, a model for factor analysing scores on a set of psychological tests administered as both pre- and postmeasures in a study of change is presented, where factors are defined to be orthogonal between as well as within occasions.

Abstract: A model is presented for factor analysing scores on a set of psychological tests administered as both pre- and postmeasures in a study of change. The model assumes that the same factors underlie the tests on each occasion, but that factor scores as well as factor loadings may change between occasions. Factors are defined to be orthogonal between as well as within occasions. A two-stage least squares procedure for fitting the model is described, and generally provides a unique rotation solution for the factors on each occasion.

••

TL;DR: Theoretical formulations of Gutmann and of Bakan were tested in three studies of sex differences in personality and support the agency-communion formulation as a framework for future inquiry.

Abstract: Theoretical formulations of Gutmann and of Bakan were tested in three studies of sex differences in personality. In Study 1, males were significantly more individualistic, objective, and distant in representations of self, others, space, and future. Study 2 found males predominantly “agentic” and females “communal” in reports of significant emotional experiences. In Study 3, seven general predictions from the agency-communion formulation were tested against 200 abstracts of published research on sex differences. The formulation was judged “relevant” to over 80% of the studies; significant differences were “confirming” of the formulation in 97% of “relevant” studies. Results indicate the importance of qualitative aspects of sex differences in personality and support the agency-communion formulation as a framework for future inquiry.

••

TL;DR: In this paper, item characteristic curves are estimated without restrictive assumptions about their mathematical form and the resulting curves are compared with estimates obtained under the assumption that all curves are of logistic form.

Abstract: Item characteristic curves are estimated without restrictive assumptions about their mathematical form. The resulting curves are compared with estimates obtained under the assumption that all curves are of logistic form. Surprising agreement is found between the curves obtained by the two unrelated methods.

••

TL;DR: For example, this paper found that students at colleges with high scores on the Faculty-Student Interaction scale more often overachieved on two of the criterion tests, while students at colleges with low scores on this scale underachieved on all three of the tests.

Abstract: In this study, selected aspects of the college environment were related to student academic achievement at 27 small liberal arts colleges. Academic achievement was measured by senior students' scores on the Area Tests of the Graduate Record Examination; the Scholastic Aptitude Test (Verbal and Mathematics) scores of these same students prior to college entrance were used as a control measure for differences in initial aptitude. The colleges' social and academic environment were assessed through students' perceptions and included five scales describing the extent of faculty-student interaction, student activism, curriculum flexibility, academic challenge, and the colleges' cultural facilities. All but the Activism scale were related to student over or underachievement on one or more of the three Area Tests (Humanities, Natural Science, Social Science). In particular, students at colleges with high scores on the Faculty-Student Interaction scale more often overachieved on two of the criteria tests, while students at colleges with low scores on this scale underachieved on all three of the tests. The results suggest that certain student-described college environmental features are related to academic achievement, although replication with another group of colleges would be desirable.

••

TL;DR: The Fletcher-Powell algorithm for minimizing a function of several variables is described, its use is discussed and illustrated, and it is shown that the algorithm can be used also when the variables satisfy certain equality constraints.

Abstract: The Fletcher-Powell algorithm for minimizing a function of several variables is described. A package of FORTRAN IV subroutines that follows this algorithm with some modifications is given and its use is discussed and illustrated. It is shown that the algorithm can be used also when the variables satisfy certain equality constraints.
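A compact sketch of the Davidon-Fletcher-Powell iteration the package implements: an approximation to the inverse Hessian is maintained and updated from steps and gradient differences. This is written in Python rather than FORTRAN, with a simple backtracking line search standing in for the original's, and is our reconstruction of the algorithm, not a translation of the package:

```python
import numpy as np

def dfp_minimize(f, grad, x0, iters=100, tol=1e-8):
    """Davidon-Fletcher-Powell quasi-Newton minimization: step along
    -H g, where H approximates the inverse Hessian, then apply the
    DFP rank-two update. A sketch, not the published subroutines."""
    x = np.asarray(x0, float)
    H = np.eye(x.size)
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g
        t = 1.0                       # backtracking (Armijo) line search
        while f(x + t * d) > f(x) + 1e-4 * t * g @ d:
            t *= 0.5
        s = t * d
        x_new = x + s
        y = grad(x_new) - g
        Hy = H @ y
        H += np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)  # DFP
        x, g = x_new, grad(x_new)
    return x

# A simple convex test function with known minimum at (1, -2).
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
x_star = dfp_minimize(f, grad, [0.0, 0.0])
print(np.round(x_star, 6))
```

The equality-constraint extension the abstract mentions can be handled by reparametrizing the free variables before calling such a routine.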

••

TL;DR: In this paper, the statistical relation between the sample and population characteristic vectors of correlation matrices with squared multiple correlations as communality estimates was investigated and the sampling fluctuations were found to relate only to differences in the square roots of characteristic roots and to sample size.

Abstract: Data are reported which show the statistical relation between the sample and population characteristic vectors of correlation matrices with squared multiple correlations as communality estimates. Sampling fluctuations were found to relate only to differences in the square roots of characteristic roots and to sample size. A principle for determining the number of factors to rotate and interpret after rotation is suggested.

••

TL;DR: For the case of two or more groups of variables (batteries), generalizations of three common factor models, the Equal Residual Variances model, the Image model of Joreskog, and the Canonical Factor Analysis model, are described and illustrated with empirical examples, as discussed by the authors.

Abstract: For the case of two or more groups of variables (batteries), generalizations of three common factor models, the Equal Residual Variances model, the Image model of Joreskog, and the Canonical Factor Analysis model, are described and illustrated with empirical examples.

••

TL;DR: In this paper, a computer program for simultaneously factor analyzing dispersion matrices obtained from independent groups is described, which is useful when a battery of tests has been administered to samples of examinees from several populations and one wants to study similarities and differences in factor structure between the different populations.

Abstract: A computer program for simultaneously factor analyzing dispersion matrices obtained from independent groups is described. This program is useful when a battery of tests has been administered to samples of examinees from several populations and one wants to study similarities and differences in factor structure between the different populations.

••

TL;DR: In this paper, a method for fitting a perfect simplex is proposed which is independent of the order of the manifest variables and is based on a procedure for scaling a set of points from their pairwise distances, which is reviewed in compact notation in the Appendix.

Abstract: A method for fitting a perfect simplex [Guttman, 1954] is suggested which, in contrast to Kaiser's [1962], is independent of the order of the manifest variables. It is based on a procedure for scaling a set of points from their pairwise distances [Torgerson, 1958; Young & Householder, 1938] which is reviewed in compact notation in the Appendix. The method is extended to an iterative algorithm for fitting a quasi-simplex. Some empirical results are included.
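The scaling procedure reviewed in the paper's Appendix (Torgerson, 1958; Young & Householder, 1938) recovers point coordinates from pairwise distances by double-centering the squared distances and taking the leading eigenvectors of the resulting scalar-product matrix. A minimal sketch; the function name is ours:

```python
import numpy as np

def torgerson_scaling(D, ndim):
    """Classical scaling: double-center the squared distances
    (Torgerson, 1958; Young & Householder, 1938) and take coordinates
    from the top eigenvectors of the scalar-product matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # matrix of scalar products
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:ndim]
    return vecs[:, top] * np.sqrt(vals[top])

rng = np.random.default_rng(3)
X = rng.normal(size=(7, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
X_hat = torgerson_scaling(D, 2)
D_hat = np.linalg.norm(X_hat[:, None] - X_hat[None, :], axis=-1)
print(np.allclose(D, D_hat))     # pairwise distances reproduced exactly
```

Because this step depends only on the pairwise distances, a simplex-fitting method built on it is independent of the order of the manifest variables, which is the contrast with Kaiser's procedure that the abstract draws.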

••

TL;DR: A method is introduced for oblique rotation to a pattern target matrix specified in advance and values are estimated by means of a general procedure for minimization with equality constraints.

Abstract: A method is introduced for oblique rotation to a pattern target matrix specified in advance. The target matrix may have all or only some of its elements specified. Values are estimated by means of a general procedure for minimization with equality constraints. Results are shown using data from Harman and Browne.

••

TL;DR: In this paper, a numerical method is presented for rotating a multi-dimensional configuration or factor solution so that the first few axes span the space of classes and the remaining axes span a space of quantitative variation.

Abstract: For certain kinds of structure consisting of quantitative dimensions superimposed on a discrete class structure, spatial representations can be viewed as being composed of two subspaces, the first of which reveals the discrete classes as isolated clusters and the second of which contains variation along the quantitative attributes. A numerical method is presented for rotating a multi-dimensional configuration or factor solution so that the first few axes span the space of classes and the remaining axes span the space of quantitative variation. The use of this method is then illustrated in the analysis of some experimental data.

••

TL;DR: A simple general theory for obtaining “two factor at a time” algorithms for any polynomial simplicity criteria satisfying a natural symmetry condition is presented and it is shown that the degree of any symmetric criterion must be a multiple of four.

Abstract: The quartimax and varimax algorithms for orthogonal rotation attempt to maximize particular simplicity criteria by a sequence of two-factor rotations. Derivations of these algorithms have been fairly complex. A simple general theory for obtaining “two factor at a time” algorithms for any polynomial simplicity criteria satisfying a natural symmetry condition is presented. It is shown that the degree of any symmetric criterion must be a multiple of four. A basic fourth degree algorithm, which is applicable to all symmetric fourth degree criteria, is derived and applied using a variety of criteria. When used with the quartimax and varimax criteria the algorithm is mathematically identical to the standard algorithms for these criteria. A basic eighth degree algorithm is also obtained and applied using a variety of eighth degree criteria. In general the problem of writing a basic algorithm for all symmetric criteria of any specified degree reduces to the problem of maximizing a trigonometric polynomial of degree one-fourth that of the criteria.
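The "two factors at a time" strategy can be sketched directly: sweep over factor pairs and, for each pair, apply the planar rotation that maximizes the simplicity criterion. The standard algorithms use the closed-form angle the paper derives; the grid search below stands in for it to keep the sketch short. The quartimax criterion and the synthetic loadings are our choices:

```python
import numpy as np

def quartimax(L):
    """Quartimax simplicity criterion: sum of fourth powers of loadings."""
    return (L ** 4).sum()

def rotate_pairwise(L, sweeps=10):
    """Orthogonal rotation 'two factors at a time': for each pair of
    factors, apply the planar rotation maximizing the criterion.
    A grid search over angles stands in for the closed-form angle."""
    L = L.copy()
    angles = np.linspace(0, np.pi / 2, 1801)
    rot = lambda a: np.array([[np.cos(a), -np.sin(a)],
                              [np.sin(a),  np.cos(a)]])
    for _ in range(sweeps):
        for i in range(L.shape[1]):
            for j in range(i + 1, L.shape[1]):
                pair = L[:, [i, j]]
                best = max(angles, key=lambda a: quartimax(pair @ rot(a)))
                L[:, [i, j]] = pair @ rot(best)
    return L

rng = np.random.default_rng(4)
# A simple-structure loading matrix, deliberately rotated away from it.
L_true = np.vstack([np.repeat([[.8, 0.]], 4, 0), np.repeat([[0., .7]], 4, 0)])
R = np.array([[np.cos(0.6), -np.sin(0.6)], [np.sin(0.6), np.cos(0.6)]])
L_rot = rotate_pairwise(L_true @ R)
print(quartimax(L_rot) >= quartimax(L_true @ R))
```

With more than two factors the sweep visits every pair repeatedly until the criterion stops improving, which is the structure shared by the quartimax and varimax algorithms the abstract discusses.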

••

TL;DR: The test is self-scoring, and although different examinees take different sets of items, the scoring method provides comparable scores for all.

Abstract: Certain modifications of a conventional test can force the item difficulty level to adjust automatically to the ability level of the examinee. Although different examinees take different sets of items, the scoring method provides comparable scores for all. Furthermore, the test is self-scoring. These advantages are obtained without some of the usual disadvantages of tailored testing.

••

TL;DR: In this article, the discriminal distributions of signal-detectability theory evolve in time according to a normal Markov process, and they can be characterized by Brownian motion generalized with a constant bias determined by signal strength.

Abstract: If the discriminal distributions of signal-detectability theory evolve in time according to a normal Markov process, they can be characterized by Brownian motion generalized with a constant bias determined by signal strength. If the process is stopped at the first occurrence of a preset criterion displacement, the resulting latency distribution provides a model for the central component of simple reaction time. Discussed are properties of the distribution which should be useful in obtaining experimental predictions from neural-counting assumptions, and in relating reaction times to basic variables of the theory of signal-detectability.
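The latency model is a first-passage problem: Brownian motion with a constant drift set by signal strength, stopped at a preset criterion displacement, whose first-passage times follow a Wald (inverse-Gaussian) distribution. A small simulation, with all parameter values our own:

```python
import numpy as np

def first_passage_times(drift, criterion, n=5000, dt=1e-3, t_max=20.0,
                        seed=5):
    """Simulate Brownian motion with constant drift (the 'bias' set by
    signal strength) and record the first time each path reaches a
    preset criterion displacement; a numerical stand-in for the
    latency distribution discussed in the abstract."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    t = np.full(n, np.nan)            # NaN = criterion never reached
    alive = np.ones(n, bool)
    for k in range(int(t_max / dt)):
        x[alive] += drift * dt + np.sqrt(dt) * rng.normal(size=alive.sum())
        hit = alive & (x >= criterion)
        t[hit] = (k + 1) * dt
        alive &= ~hit
        if not alive.any():
            break
    return t

t = first_passage_times(drift=2.0, criterion=1.0)
# Theoretical mean first-passage time is criterion / drift = 0.5.
print(np.nanmean(t))
```

Larger drift (stronger signal) shortens the simulated latencies, which is the qualitative link between reaction time and signal-detectability variables that the paper develops.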