
Showing papers in "Psychometrika in 1974"


Journal ArticleDOI
TL;DR: In this article, an index of factorial simplicity, employing the quartimax transformational criteria of Carroll, Wrigley and Neuhaus, and Saunders, was developed.
Abstract: An index of factorial simplicity, employing the quartimax transformational criteria of Carroll, Wrigley and Neuhaus, and Saunders, is developed. The index is defined both for each row of a factor pattern matrix separately and for the matrix as a whole. It varies between zero and one. The problem of calibrating the index is discussed.
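As a rough illustration of how such a row index behaves, the sketch below implements the usual quartimax-style formulation of Kaiser's row index; the exact formula is my assumption from standard accounts, not a transcription of the paper. It equals 1 when a row loads on a single factor and 0 when it loads equally on all of them.

```python
import numpy as np

def ifs_row(loadings):
    """Index of factorial simplicity for one row of a factor pattern
    matrix (quartimax-style formulation; assumed, not transcribed
    from the paper)."""
    lam = np.asarray(loadings, dtype=float)
    m = lam.size                      # number of factors
    s2 = np.sum(lam ** 2)             # row communality
    num = m * np.sum(lam ** 4) - s2 ** 2
    den = (m - 1) * s2 ** 2
    return num / den

# A row loading on one factor is maximally simple (index = 1);
# equal loadings on every factor are maximally complex (index = 0).
print(ifs_row([0.8, 0.0, 0.0]))   # 1.0
print(ifs_row([0.5, 0.5, 0.5]))   # 0.0
```

A whole-matrix index can then be formed by pooling numerators and denominators over rows, which keeps the (0, 1) bounds.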

10,346 citations


Journal ArticleDOI
TL;DR: In this paper, the authors find that a number of challenging problems still remain to be overcome, even in the simplest case of the analysis of a single symmetric matrix of similarity estimates, and they are optimistic that efforts directed toward surmounting the remaining difficulties will reap both methodological and substantive benefits.
Abstract: After struggling with the problem of representing structure in similarity data for over 20 years, I find that a number of challenging problems still remain to be overcome—even in the simplest case of the analysis of a single symmetric matrix of similarity estimates. At the same time, I am more optimistic than ever that efforts directed toward surmounting the remaining difficulties will reap both methodological and substantive benefits. The methodological benefits that I foresee include both an improved efficiency and a deeper understanding of “discovery” methods of data analysis. And the substantive benefits should follow, through the greater leverage that such methods will provide for the study of complex empirical phenomena—perhaps particularly those characteristic of the human mind.

444 citations


Journal ArticleDOI
TL;DR: Several graph-theoretic criteria are proposed for use within a general clustering paradigm as a means of developing procedures “in between” the extremes of complete-link and single-link hierarchical partitioning.
Abstract: This paper attempts to review and expand upon the relationship between graph theory and the clustering of a set of objects. Several graph-theoretic criteria are proposed for use within a general clustering paradigm as a means of developing procedures “in between” the extremes of complete-link and single-link hierarchical partitioning; these same ideas are then extended to include the more general problem of constructing subsets of objects with overlap. Finally, a number of related topics are surveyed within the general context of reinterpreting and justifying methods of clustering either through standard concepts in graph theory or their simple extensions.

151 citations


Journal ArticleDOI
TL;DR: In this paper, a convenient method for utilizing the information provided by omissions is presented, together with theoretical and empirical justifications for the ability and item parameter estimates obtained by the new method.
Abstract: Omitted items cannot properly be treated as wrong when estimating ability and item parameters. A convenient method for utilizing the information provided by omissions is presented. Theoretical and empirical justifications are presented for the estimates obtained by the new method.
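The abstract does not spell out the estimator, but the core idea, that an omit carries partial information and should not simply be scored as wrong, can be illustrated with a deliberately simplified scoring scheme that credits an omitted k-choice item at the chance rate 1/k. This is an illustrative stand-in, not necessarily the paper's method.

```python
def score_responses(responses, n_choices):
    """Score a response vector whose entries are 1 (right), 0 (wrong),
    or None (omitted).  Omits are credited at the chance rate 1/k
    rather than 0 -- an illustrative scheme, not the paper's exact
    estimator."""
    k = n_choices
    return sum((1.0 / k) if r is None else float(r) for r in responses)

# Five 4-choice items: two right, one wrong, two omitted.
print(score_responses([1, 1, 0, None, None], n_choices=4))  # 2.5
```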

147 citations


Journal ArticleDOI
TL;DR: In this paper, the rank order of the numbers in a column is determined by a linear rule of combination of latent quantities characterizing each row object with respect to a small number of underlying factors.
Abstract: The numbers in each column of an n × m matrix of multivariate data are interpreted as giving the measured values of all n of the objects studied on one of m different variables. Except for random error, the rank order of the numbers in such a column is assumed to be determined by a linear rule of combination of latent quantities characterizing each row object with respect to a small number of underlying factors. An approximation to the linear structure assumed to underlie the ordinal properties of the data is obtained by iterative adjustment to minimize an index of over-all departure from monotonicity. The method is “nonmetric” in that the obtained structure is invariant under monotone transformations of the data within each column. Except in certain degenerate cases, the structure is nevertheless determined essentially up to an affine transformation. Tests show (a) that, when the assumed monotone relationships are strictly linear, the recovered structure tends closely to approximate that obtained by standard (metric) factor analysis but (b) that, when these relationships are severely nonlinear, the nonmetric method avoids the inherent tendency of the metric method to yield additional, spurious factors. From the practical standpoint, however, the usefulness of the nonmetric method is limited by its greater computational cost, vulnerability to degeneracy, and sensitivity to error variance.

147 citations


Journal ArticleDOI
TL;DR: The homogeneous case of the continuous response model is expanded to the multi-dimensional latent space, and the normal ogive model is presented, and it is found that there is a vector of sufficient statistics for estimating the subject's vector of latent traits, given the item parameter vectors.
Abstract: The homogeneous case of the continuous response model is expanded to the multi-dimensional latent space, and the normal ogive model is presented. The operating density characteristic of the continuous item response and the vector of basic functions are developed. It is found that there is a vector of sufficient statistics for estimating the subject's vector of latent traits, given the item parameter vectors. The relationship between the model and linear factor analysis is observed. The matrix of item response information functions is introduced. Some additional observations are also made.

128 citations



Journal ArticleDOI
TL;DR: In this paper, a probabilistic, multidimensional version of Coombs' unfolding model is obtained by assuming that the projections of each stimulus and each individual on each axis are normally distributed.
Abstract: A probabilistic, multidimensional version of Coombs' unfolding model is obtained by assuming that the projections of each stimulus and each individual on each axis are normally distributed. Exact equations are developed for the single-dimensional case and an approximate one for the multidimensional case. Both types of equations are expressed solely in terms of univariate normal distribution functions and are therefore easy to evaluate. A Monte Carlo experiment, involving 9 stimuli and 3 subjects in a two-dimensional space, was run to determine the degree of accuracy of the multidimensional equation and the feasibility of using iterative methods to obtain maximum likelihood estimates of the stimulus and subject coordinates. The results reported here are gratifying in both respects.

101 citations


Journal ArticleDOI
TL;DR: In this paper, Monte Carlo procedures were used to investigate the properties of a nonmetric multidimensional scaling algorithm when used to scale an incomplete matrix of dissimilarities, and various recommendations for users who wish to scale incomplete matrices are made: (a) recovery was found to be satisfactory provided that the "degrees of freedom" ratio exceeded 3.5, irrespective of error level.
Abstract: Monte Carlo procedures were used to investigate the properties of a nonmetric multidimensional scaling algorithm when used to scale an incomplete matrix of dissimilarities. Various recommendations for users who wish to scale incomplete matrices are made: (a) recovery was found to be satisfactory provided that the “degrees of freedom” ratio exceeded 3.5, irrespective of error level; (b) cyclic designs were found to provide best recovery, although random patterns of deletion performed almost as well; and (c) strongly locally connected designs, specifically overlapping cliques, were generally inferior. These conclusions are based on 837 scaling solutions and are applicable to stimulus sets containing more than 30 objects.
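The “degrees of freedom” ratio is not defined in the abstract; a common reading is the number of dissimilarities actually scaled divided by the number of coordinates to be estimated (n points in d dimensions). Under that assumption, recommendation (a) can be checked with a one-line helper:

```python
def df_ratio(n_dissimilarities, n_points, n_dims):
    """'Degrees of freedom' ratio: data values per estimated
    coordinate (assumed definition).  The study's rule of thumb is
    to keep this above about 3.5."""
    return n_dissimilarities / (n_points * n_dims)

# A complete matrix for 30 objects scaled in 2 dimensions:
full = 30 * 29 // 2                    # 435 dissimilarities
print(df_ratio(full, 30, 2))           # 7.25
# Deleting half the entries still leaves the ratio above 3.5:
print(df_ratio(full // 2, 30, 2))      # ~3.62
```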

95 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present examples of multivariate matching methods that will yield the same percent reduction in bias for each matching variable for a variety of underlying distributions, and for each one, matching methods are defined which are equal percent bias reducing.
Abstract: Multivariate matching methods are commonly used in the behavioral and medical sciences in an attempt to control bias when randomization is not feasible. Some examples of multivariate matching methods are discussed in Althauser and Rubin (1970) and Cochran and Rubin (1973), but such methods otherwise seem to have received little attention in the literature. Here, we present examples of multivariate matching methods that will yield the same percent reduction in bias for each matching variable for a variety of underlying distributions. Eleven distributional cases are considered, and for each one, matching methods are defined which are equal percent bias reducing. Methods discussed in Section 8, which are based on the values of the estimated best linear discriminant or which define distance by a sample-based inner product, will probably be the most generally applicable in practice.

67 citations


Journal ArticleDOI
TL;DR: In this article, two new measures of fit are suggested which do not depend upon the norms of A and B, which are (0, 1)-bounded, and which, therefore, provide meaningful answers for comparative analyses.
Abstract: In connection with a least-squares solution for fitting one matrix, A, to another, B, under optimal choice of a rigid motion and a dilation, Schonemann and Carroll suggested two measures of fit: a raw measure, e, and a refined similarity measure, e_s, which is symmetric. Both measures share the weakness of depending upon the norm of the target matrix, B; e.g., e(A, kB) ≠ e(A, B) for k ≠ 1. Therefore, both measures are useless for answering questions of the type: “Does A fit B better than A fits C?”. In this note two new measures of fit are suggested which do not depend upon the norms of A and B, which are (0, 1)-bounded, and which, therefore, provide meaningful answers for comparative analyses.
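The norm dependence is easy to see numerically. The snippet below does not reproduce the paper's new measures; it merely demonstrates the problem with a raw sum-of-squares residual, and shows one simple way (normalizing both matrices to unit Frobenius norm) to obtain a scale-free, (0, 1)-bounded index.

```python
import numpy as np

def raw_fit(A, B):
    """Raw sum-of-squares discrepancy -- depends on the norm of B."""
    return np.sum((A - B) ** 2)

def normed_fit(A, B):
    """Discrepancy after scaling both matrices to unit Frobenius norm;
    invariant under B -> kB (illustrative, not the paper's measures)."""
    An = A / np.linalg.norm(A)
    Bn = B / np.linalg.norm(B)
    return np.sum((An - Bn) ** 2) / 4.0   # /4 bounds the index in [0, 1]

A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
print(raw_fit(A, B), raw_fit(A, 2 * B))        # differ
print(normed_fit(A, B), normed_fit(A, 2 * B))  # identical
```

The division by 4 works because two unit-norm matrices can differ by at most a Frobenius norm of 2, so the squared distance never exceeds 4.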

Journal ArticleDOI
TL;DR: In this paper, it is argued that Guttman's measure of indeterminacy is inconsistent with the foundations of the factor model in probability theory, and the traditional measures of factor determinacy used by earlier writers should be reinstated.
Abstract: Results obtained by Guttman [1955] on the determinacy of common factors have been thought to have disturbing consequences for the common factor model. It is argued that factors must be thought of as unobservable, and uniquely defined but numerically indeterminate. It follows that Guttman's measure of indeterminacy is inconsistent with the foundations of the factor model in probability theory, and the traditional measures of factor indeterminacy used by earlier writers should be reinstated. These yield no disturbing conclusions about the model.

Journal ArticleDOI
TL;DR: In this article, the theoretical foundations of a wide range of asymmetric association measures are discussed and new measures are also suggested, and fifteen of these association measures, some previously suggested, some new, are singled out for a computer-assisted numerical study in which they are compared under a wide variety of conditions.
Abstract: This paper discusses the general problem of measuring the association between an independent nominal-scaled variable X and a dependent variable Y whose scale of measurement may be interval, ordinal or nominal. The theoretical foundations of a wide range of asymmetric association measures are discussed. Some new measures are also suggested. Fifteen of these association measures, some previously suggested, some new, are singled out for a computer-assisted numerical study in which we compute the value actually taken by each measure under a wide variety of conditions. This comparative study provides important insights into the behavior of the measures.

Journal ArticleDOI
TL;DR: In this article, a method of estimating the reliability of a test which has been divided into three parts is presented, where the parts are homogeneous in content (congeneric), i.e., if their true scores are linearly related and if sample size is large.
Abstract: This paper gives a method of estimating the reliability of a test which has been divided into three parts. The parts do not have to satisfy any statistical criteria like parallelism or τ-equivalence. If the parts are homogeneous in content (congeneric), i.e., if their true scores are linearly related, and if sample size is large, then the method described in this paper will give the precise value of the reliability parameter. If the homogeneity condition is violated, then underestimation will typically result. However, the estimate will always be at least as accurate as coefficient α and Guttman's lower bound λ3 when the same data are used. An application to real data is presented by way of illustration. Seven different splits of the same test are analyzed. The new method yields remarkably stable reliability estimates across splits, as predicted by the theory. One deviating value can be accounted for by a certain unsuspected peculiarity of the test composition. Neither coefficient α nor λ3 would have led to the same discovery.
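The benchmark here, coefficient α (which coincides with Guttman's λ3), is easy to compute from the three part scores. A minimal version for comparison purposes; the paper's own congeneric estimator is not reproduced here, and the data below are simulated for illustration.

```python
import numpy as np

def coefficient_alpha(parts):
    """Cronbach's alpha (equivalently Guttman's lambda-3) from an
    (n_persons x k_parts) score matrix -- the lower bound the paper
    uses as its benchmark, not the paper's own estimator."""
    X = np.asarray(parts, dtype=float)
    k = X.shape[1]
    part_vars = X.var(axis=0, ddof=1).sum()     # sum of part variances
    total_var = X.sum(axis=1).var(ddof=1)       # variance of total score
    return (k / (k - 1)) * (1.0 - part_vars / total_var)

# Simulated congeneric-like data: three parts sharing one true score.
rng = np.random.default_rng(0)
true_score = rng.normal(size=200)
parts = np.column_stack([true_score + rng.normal(scale=0.7, size=200)
                         for _ in range(3)])
print(round(coefficient_alpha(parts), 3))
```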

Journal ArticleDOI
TL;DR: Stevens' Law and Fechner's Law represent the most widely known quantifications of stimulus-response relations in all of experimental psychology.
Abstract: S. S. Stevens, professor of Psychophysics at Harvard University, was born in Ogden, Utah on 4 November 1906, and died in Vail, Colorado on 18 January 1973. He was, without question, the strongest voice in psychophysics since G. T. Fechner. Indeed, Stevens’ Law and Fechner’s Law represent the most widely known quantifications of stimulus-response relations in all of experimental psychology.

Journal ArticleDOI
TL;DR: In this paper, four prominent oblique transformation techniques (promax, the Harris-Kaiser procedure, biquartimin, and direct oblimin) are examined and compared.
Abstract: Four prominent oblique transformation techniques—promax, the Harris-Kaiser procedure, biquartimin, and direct oblimin—are examined and compared. Additionally, two newly-developed procedures, falling into the category designated as Case III by Harris and Kaiser [1964], are presented and included in the comparisons. The techniques are compared in light of their freedom from bias in the interfactor correlations, and their ability to yield clear simple structures, over many data sets—some constructed and some “real”—varying widely in terms of number of variables and factors, factorial complexity, and clarity of the hyperplanes. Results are discussed, and implications for practice are noted.

Journal ArticleDOI
TL;DR: Results show how stress is affected by error, number of points, and number of dimensions, and indicate that stress and the “elbow” criterion are inadequate for purposes of identifying true dimensionality when there is error in the data.
Abstract: The study deals with the problem of determining the true dimensionality of data-with-error scaled by Kruskal's multidimensional scaling technique. Artificial data were constructed for 6, 8, 12, 16, and 30 point configurations of 1, 2, or 3 true dimensions by adding varying amounts of error to the true distances. Results show how stress is affected by error, number of points, and number of dimensions, and indicate that stress and the “elbow” criterion are inadequate for purposes of identifying true dimensionality when there is error in the data. The Wagenaar-Padmos procedure for identifying true dimensionality and error level is discussed. A simplified technique, involving a measure called Constraint, is suggested.

Journal ArticleDOI
TL;DR: In this paper, a simple procedure for testing heterogeneity of variance is developed which generalizes readily to complex, multi-factor experimental designs, such as three-way factorial designs.
Abstract: A simple procedure for testing heterogeneity of variance is developed which generalizes readily to complex, multi-factor experimental designs. Monte Carlo studies indicate that the Z-variance test statistic presented here yields results equivalent to other familiar tests for heterogeneity of variance in simple one-way designs where comparisons are feasible. The primary advantage of the Z-variance test is in the analysis of factorial effects on sample variances in more complex designs. An example involving a three-way factorial design is presented.

Journal ArticleDOI
TL;DR: In this paper, a new criterion for rotation to an oblique simple structure is proposed; the results obtained are similar to those obtained by Cattell and Muerle's maxplane criterion.
Abstract: A new criterion for rotation to an oblique simple structure is proposed. The results obtained are similar to those obtained by Cattell and Muerle's maxplane criterion. Since the proposed criterion is smooth, it is possible to locate the local maxima using simple gradient techniques. The results of the application of the functionplane criterion to three sets of data are given. In each case a better fit to the subjective solution was obtained using the functionplane criterion than was reported by Hakstian for the oblimax, promax, maxplane, or Harris-Kaiser methods.

Journal ArticleDOI
TL;DR: In this paper, similarity judgments of three-dimensional stimuli were simulated, with the hypothetical subject attending to only some dimensions of stimulus variation (i.e., "subsampling") on each trial.
Abstract: Similarity judgments of three-dimensional stimuli were simulated, with the hypothetical subject attending to only some dimensions of stimulus variation (i.e., “subsampling”) on each trial. Recovery of the stimulus configuration by non-metric multidimensional scaling was investigated as a function of subsampling, the amount of random error in the judgments, and the number of stimuli being scaled.

Journal ArticleDOI
TL;DR: In this paper, maximum likelihood estimates of the free parameters, and an asymptotic likelihood-ratio test, are given for the hypothesis that one or more elements of a covariance matrix are zero, and/or that two or more of its elements are equal.
Abstract: Maximum likelihood estimates of the free parameters, and an asymptotic likelihood-ratio test, are given for the hypothesis that one or more elements of a covariance matrix are zero, and/or that two or more of its elements are equal. The theory applies immediately to a transformation of the covariance matrix by a known nonsingular matrix. Estimation is by Newton's method, starting conveniently from a closed-form least-squares solution.

Journal ArticleDOI
TL;DR: The hypothesis that situational variables may moderate match-mismatch effects was tested and supported: the outcome of a match or mismatch in level of differentiation is mediated by situational variables.
Abstract: Previous studies have shown that persons matched in level of differentiation are likely to develop greater interpersonal attraction in the course of an interaction than mismatched persons. These studies were all conducted in situations where the interacting persons were working toward a common goal. To test the hypothesis that situational variables may moderate match-mismatch effects, the present study investigated these effects when the interacting persons were in conflict. On the basis of their performance in tests of field-dependence-independence, Ss were selected as relatively high (Hi-Diff Ss) or relatively low (Lo-Diff Ss) in level of differentiation. Three kinds of dyads were composed (Hi-Diff/Hi-Diff, Lo-Diff/Lo-Diff, and Hi-Diff/Lo-Diff), and the task set for the dyad members was to reconcile conflict on an issue about which they were known to disagree. It was predicted that because of the more accommodating quality of Lo-Diff persons, dyads consisting of one or two such Ss would more often reconcile their disagreements and show greater interpersonal attraction than dyads consisting of two Hi-Diff Ss. Both predictions were confirmed, supporting the hypothesis that the outcome of match or mismatch is mediated by situational variables.

Journal ArticleDOI
TL;DR: A model or procedure which uses the information contained in the interaction between a person and an item to remove the effects of random guessing from estimates of ability, difficulty, and discrimination is presented.
Abstract: In latent trait models the standard procedure for handling the problem caused by guessing on multiple choice tests is to estimate a parameter which is intended to measure the “guessingness” inherent in an item. Birnbaum's three parameter model, which handles guessing in this manner, ignores individual differences in guessing tendency. This paper presents a model or procedure which uses the information contained in the interaction between a person and an item to remove the effects of random guessing from estimates of ability, difficulty, and discrimination. Simulated and real data are presented which support the model in terms of fit and information.

Journal ArticleDOI
TL;DR: General algorithms for computing the likelihood of any sequence generated by an absorbing Markov-chain are described, which enable an investigator to compute maximum likelihood estimates of parameters using unconstrained optimization techniques.
Abstract: General algorithms for computing the likelihood of any sequence generated by an absorbing Markov chain are described. These algorithms enable an investigator to compute maximum likelihood estimates of parameters using unconstrained optimization techniques. The problem of parameter identifiability is reformulated into questions concerning the behavior of the likelihood function in the neighborhood of an extremum. An alternative characterization of the concept of identifiability is proposed. A procedure is developed for deciding whether or not this definition is satisfied.
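The basic likelihood computation is straightforward: the probability of an observed state sequence is the product of the initial-state probability and the successive transition probabilities. A minimal sketch with a hypothetical three-state chain whose last state is absorbing (the transition probabilities are made-up numbers, not from the paper):

```python
import numpy as np

def sequence_log_likelihood(seq, P, start):
    """Log-likelihood of a state sequence under a Markov chain with
    transition matrix P and initial distribution `start`.  States are
    integer indices; an absorbing state i has P[i, i] = 1."""
    ll = np.log(start[seq[0]])
    for a, b in zip(seq, seq[1:]):
        ll += np.log(P[a, b])
    return ll

# Toy chain: states 0 and 1 are transient, state 2 is absorbing.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])
start = np.array([0.8, 0.2, 0.0])
print(sequence_log_likelihood([0, 0, 1, 2], P, start))  # log(0.036)
```

Maximum likelihood estimation then amounts to maximizing this quantity over the free entries of P, summed across observed sequences.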

Journal ArticleDOI
TL;DR: In this paper, the relationship between Freeman's measure of association and the asymmetric association measures developed by Somers is described within the context of a contingency table, where the indices defined by Somers are usually used when the levels of both factors are ordered and one is assumed to be the independent factor.
Abstract: Within the context of a contingency table, this note describes the relationship between Freeman's measure of association θ and the asymmetric association measures developed by Somers. The θ coefficient is appropriate for a contingency table in which the levels of one factor are ordered and the levels of the other factor are unordered; the indices defined by Somers are usually used when the levels of both factors are ordered and one is assumed to be the independent factor.


Journal ArticleDOI
TL;DR: In this paper, a form of the solution which does not involve an eigenvalue problem but does require an iteration similar to Browne's is proposed; it suggests the possible existence of a singularity, and a simple modification of Browne's computational procedure is proposed which deals with this case.
Abstract: Browne [1967] has given a method of solving the problem (originally stated by Mosier [1939]) of finding a least squares fit to a specified factor structure. The problem is one of minimizing the sum of squared residuals of Φ − FT with Diag(T'T) = I. Browne's solution involves the eigenvectors and eigenvalues of F'F and leads to an iterative solution. This paper gives a form of the solution which does not involve solution of an eigenvalue problem but does require an iteration similar to Browne's. It suggests the possible existence of a singularity, and a simple modification of Browne's computational procedure is proposed which deals with this case. A better starting value for the iteration is also proposed, for which convergence is guaranteed using the ordinary Newton iteration.

Journal ArticleDOI
TL;DR: The effects of various possible changes in the Scholastic Aptitude Test are explored and methods for designing and evaluating multilevel tests are described and illustrated.
Abstract: Some practical methods for redesigning a homogeneous test are described. The effects of various possible changes in the Scholastic Aptitude Test are explored. Methods for designing and evaluating multilevel tests are described and illustrated.

Journal ArticleDOI
TL;DR: In this article, the authors consider one class of multivariate matching methods which yield the same percent reduction in expected bias for each of the matching variables, and derive the expression for the maximum attainable percent reduction in bias given fixed distributions and fixed sample sizes.
Abstract: Matched sampling is a method of data collection designed to reduce bias and variability due to specific matching variables. Although often used to control for bias in studies in which randomization is practically impossible, there is virtually no statistical literature devoted to investigating the ability of matched sampling to control bias in the common case of many matching variables. An obvious problem in studying the multivariate matching situation is the variety of sampling plans, underlying distributions, and intuitively reasonable matching methods. This article considers one class of multivariate matching methods which yield the same percent reduction in expected bias for each of the matching variables. The primary result is the derivation of the expression for the maximum attainable percent reduction in bias given fixed distributions and fixed sample sizes. An examination of trends in this maximum leads to a procedure for estimating minimum ratios of sample sizes needed to obtain well-matched samples.
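The "percent reduction in bias" for a matching variable is conventionally 100 times one minus the ratio of the matched-sample mean difference to the unmatched mean difference between treatment and control groups; the helper below sketches that definition (assumed here, as the abstract does not state it), on made-up numbers.

```python
import numpy as np

def percent_bias_reduction(x_t, x_c, x_t_matched, x_c_matched):
    """Percent reduction in bias for one matching variable: 100 * (1 -
    matched mean difference / unmatched mean difference).  This is the
    conventional definition, assumed rather than quoted from the paper."""
    before = np.mean(x_t) - np.mean(x_c)
    after = np.mean(x_t_matched) - np.mean(x_c_matched)
    return 100.0 * (1.0 - after / before)

# Hypothetical numbers: matching shrinks the mean difference 1.0 -> 0.2,
# an 80 percent reduction in bias on this variable.
print(percent_bias_reduction([3.0, 5.0], [2.0, 4.0],
                             [3.0, 5.0], [2.8, 4.8]))  # ~80.0
```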