
Showing papers in "Psychometrika in 1981"


Journal ArticleDOI
TL;DR: Maximum likelihood estimation of item parameters in the marginal distribution becomes practical with computing procedures based on an EM algorithm; the procedure applies to general item-response models lacking simple sufficient statistics for ability, including models with more than one latent dimension.
Abstract: Maximum likelihood estimation of item parameters in the marginal distribution, integrating over the distribution of ability, becomes practical when computing procedures based on an EM algorithm are used. By characterizing the ability distribution empirically, arbitrary assumptions about its form are avoided. The EM procedure is shown to apply to general item-response models lacking simple sufficient statistics for ability. This includes models with more than one latent dimension.

2,137 citations
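The marginal-maximum-likelihood EM idea can be sketched for the simplest case: a one-parameter (Rasch) model with a fixed normal quadrature approximating the ability distribution. The simulation setup, node grid, and Newton M-step below are illustrative choices, not the authors' own procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Rasch data: P(X_ij = 1) = sigmoid(theta_i - b_j)
n_persons, n_items = 500, 5
true_b = np.linspace(-1.0, 1.0, n_items)
theta = rng.standard_normal(n_persons)
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - true_b[None, :])))
X = (rng.random((n_persons, n_items)) < prob).astype(float)

# Fixed quadrature approximating a standard-normal ability distribution
nodes = np.linspace(-4.0, 4.0, 21)
weights = np.exp(-0.5 * nodes ** 2)
weights /= weights.sum()

b = np.zeros(n_items)
for _ in range(100):
    # E-step: posterior weight of each quadrature node for each person
    pq = 1.0 / (1.0 + np.exp(-(nodes[:, None] - b[None, :])))        # (Q, J)
    like = np.prod(np.where(X[:, None, :] == 1.0, pq[None], 1.0 - pq[None]), axis=2)
    post = like * weights                                            # (N, Q)
    post /= post.sum(axis=1, keepdims=True)
    n_q = post.sum(axis=0)            # expected number of persons at node q
    r_qj = post.T @ X                 # expected correct answers at node q, item j
    # M-step: one Newton step per item on the expected complete-data likelihood
    resid = (r_qj - n_q[:, None] * pq).sum(axis=0)
    info = (n_q[:, None] * pq * (1.0 - pq)).sum(axis=0)
    b = b - resid / info
```

Because ability is integrated out against the quadrature, no per-person ability estimate is needed, which is the point of the marginal approach.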


Journal ArticleDOI
TL;DR: Results indicated that a subset of internal criterion measures could be identified which appear to be valid indices of correct cluster recovery and could form the basis of a permutation test for the existence of cluster structure or a clustering algorithm.
Abstract: A Monte Carlo evaluation of thirty internal criterion measures for cluster analysis was conducted. Artificial data sets were constructed with clusters which exhibited the properties of internal cohesion and external isolation. The data sets were analyzed by four hierarchical clustering methods. The resulting values of the internal criteria were compared with two external criterion indices which determined the degree of recovery of correct cluster structure by the algorithms. The results indicated that a subset of internal criterion measures could be identified which appear to be valid indices of correct cluster recovery. Indices from this subset could form the basis of a permutation test for the existence of cluster structure or a clustering algorithm.

391 citations
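A toy version of the comparison in the abstract: one internal criterion (a between/within sum-of-squares ratio) checked against one external recovery index (the Rand index) on artificial clustered data. These two particular indices and the data are illustrative stand-ins, not the thirty criteria of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Artificial data: two internally cohesive, externally isolated clusters
X = np.vstack([rng.normal(0.0, 0.5, (30, 2)), rng.normal(5.0, 0.5, (30, 2))])
truth = np.repeat([0, 1], 30)

def rand_index(a, b):
    """External criterion: pairwise agreement between two partitions."""
    same_a = a[:, None] == a[None, :]
    same_b = b[:, None] == b[None, :]
    iu = np.triu_indices(len(a), 1)
    return (same_a == same_b)[iu].mean()

def bw_ratio(X, labels):
    """Internal criterion: between- over within-cluster sum of squares."""
    grand = X.mean(axis=0)
    within = between = 0.0
    for g in np.unique(labels):
        pts = X[labels == g]
        centre = pts.mean(axis=0)
        within += ((pts - centre) ** 2).sum()
        between += len(pts) * ((centre - grand) ** 2).sum()
    return between / within

good = (X[:, 0] > 2.5).astype(int)      # a partition recovering the structure
bad = rng.integers(0, 2, len(truth))    # an arbitrary partition
```

A "valid" internal criterion in the paper's sense is one whose ranking of candidate partitions agrees with the external recovery index, as the two criteria above do on this example.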


Journal ArticleDOI
TL;DR: This paper presented an overview of an approach to the quantitative analysis of qualitative data with theoretical and methodological explanations of the two cornerstones of the approach, Alternating Least Squares and Optimal Scaling.
Abstract: This paper presents an overview of an approach to the quantitative analysis of qualitative data with theoretical and methodological explanations of the two cornerstones of the approach, Alternating Least Squares and Optimal Scaling. Using these two principles, my colleagues and I have extended a variety of analysis procedures originally proposed for quantitative (interval or ratio) data to qualitative (nominal or ordinal) data, including additivity analysis and analysis of variance; multiple and canonical regression; principal components; common factor and three mode factor analysis; and multidimensional scaling. The approach has two advantages: (a) If a least squares procedure is known for analyzing quantitative data, it can be extended to qualitative data; and (b) the resulting algorithm will be convergent. Three completely worked through examples of the additivity analysis procedure and the steps involved in the regression procedures are presented.

302 citations
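The alternating-least-squares principle for ordinal data can be sketched with its two half-steps: a least-squares model (regression) fit, then an "optimal scaling" step that replaces the data by the best monotone transformation, computed here by pool-adjacent-violators. This is a generic illustration of the principle under simple assumptions, not the authors' algorithms.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares monotone fit to y."""
    vals, wts, lens = [], [], []
    for v in y:
        vals.append(float(v)); wts.append(1.0); lens.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            m = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w
            n = lens[-2] + lens[-1]
            vals[-2:], wts[-2:], lens[-2:] = [m], [w], [n]
    return np.concatenate([np.full(n, v) for v, n in zip(vals, lens)])

rng = np.random.default_rng(2)
n = 40
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
latent = X @ np.array([0.0, 1.0]) + 0.3 * rng.standard_normal(n)
y_ord = np.digitize(latent, [-0.5, 0.5])          # ordinal data, 3 categories

z = (y_ord - y_ord.mean()) / y_ord.std()          # initial quantification
losses = []
for _ in range(10):
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)  # model (least squares) step
    zhat = X @ beta
    losses.append(((z - zhat) ** 2).sum())
    order = np.argsort(y_ord, kind="stable")      # optimal-scaling step:
    z = np.empty(n)                               # best transformation of the
    z[order] = pava(zhat[order])                  # data, monotone in y_ord
    z = (z - z.mean()) / z.std()                  # renormalize
```

Each half-step is itself a least-squares problem, which is why the alternation yields a convergent (loss-nonincreasing) algorithm, the property claimed in the abstract.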


Journal ArticleDOI
TL;DR: In this paper, a new method is proposed for a simultaneous factor analysis of dichotomous responses from several groups of individuals, making it possible to compare factor loading pattern, factor variances and covariances, and factor means over groups.
Abstract: A new method is proposed for a simultaneous factor analysis of dichotomous responses from several groups of individuals. The method makes it possible to compare factor loading pattern, factor variances and covariances, and factor means over groups. The method uses information from first- and second-order proportions and estimates the model by generalized least squares. Hypotheses regarding different degrees of invariance over groups may be evaluated by a large-sample chi-square test.

231 citations


Journal ArticleDOI
TL;DR: In this paper, unbiased estimators were derived for an examinee's ability parameter θ and for his proportion-correct true score ς, for the variances of θ and ς across examinees in the group tested, and for the parallel-forms reliability of the maximum likelihood estimator θ̂.
Abstract: Given known item parameters, unbiased estimators are derived (i) for an examinee's ability parameter θ and for his proportion-correct true score ς, (ii) for the variances of θ and ς across examinees in the group tested, and (iii) for the parallel-forms reliability of the maximum likelihood estimator θ̂.

174 citations


Journal ArticleDOI
TL;DR: In this article, necessary and sufficient conditions are given for the existence and uniqueness of a solution of the so-called unconditional (UML) and conditional (CML) maximum-likelihood estimation equations in the dichotomous Rasch model.
Abstract: Necessary and sufficient conditions for the existence and uniqueness of a solution of the so-called “unconditional” (UML) and the “conditional” (CML) maximum-likelihood estimation equations in the dichotomous Rasch model are given. The basic critical condition is essentially the same for UML and CML estimation. For complete data matrices A, it is formulated both as a structural property of A and in terms of the sufficient marginal sums. In case of incomplete data, the condition is equivalent to complete connectedness of a certain directed graph. It is shown how to apply the results in practical uses of the Rasch model.

144 citations
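The directed-graph condition can be checked mechanically. In the sketch below (an illustration with hypothetical data, reading the condition as strong connectedness of the digraph), there is an edge i → j whenever some person solved item i but failed item j.

```python
import numpy as np

def well_conditioned(A):
    """Digraph check for a 0/1 person-by-item matrix A: edge i -> j iff
    some person solved item i but failed item j; the critical condition
    is strong connectedness of this digraph."""
    J = A.shape[1]
    adj = np.zeros((J, J), dtype=bool)
    for i in range(J):
        for j in range(J):
            if i != j:
                adj[i, j] = bool(np.any((A[:, i] == 1) & (A[:, j] == 0)))

    def reaches_all(M):
        """Depth-first search from node 0 visits every node."""
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for v in range(J):
                if M[u, v] and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == J

    # Strongly connected: node 0 reaches all nodes in the graph and its reverse
    return reaches_all(adj) and reaches_all(adj.T)

# A perfect Guttman pattern is ill-conditioned: no item "beats" an easier one,
# so the digraph has no edge back toward the easiest item
guttman = np.array([[1, 0, 0],
                    [1, 1, 0],
                    [1, 1, 1]])

# One reversal of the pattern restores strong connectedness
mixed = np.vstack([guttman, [[0, 1, 1]]])
```

The Guttman example also illustrates why the condition bites in practice: perfectly scalable data leave the item difficulties unidentified on the logit scale.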


Journal ArticleDOI
Robert J. Boik1
TL;DR: In this paper, the validity conditions for univariate repeated measures designs are described and the effects of nonsphericity on test size and power are illustrated by means of a set of charts.
Abstract: The validity conditions for univariate repeated measures designs are described. Attention is focused on the sphericity requirement. For a v-degree-of-freedom family of comparisons among the repeated measures, sphericity exists when all contrasts contained in the v-dimensional space have equal variances. Under nonsphericity, upper and lower bounds on test size and power of a priori repeated-measures F tests are derived. The effects of nonsphericity are illustrated by means of a set of charts. The charts reveal that small departures from sphericity (.97 ≤ ε < 1.00) can seriously affect test size and power. It is recommended that separate rather than pooled error term procedures be routinely used to test a priori hypotheses.

132 citations
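The sphericity parameter ε can be computed directly from a repeated-measures covariance matrix; the snippet below is a standard Greenhouse–Geisser-type calculation, included as an illustration rather than taken from the paper.

```python
import numpy as np

def gg_epsilon(S):
    """Greenhouse-Geisser epsilon for a k x k covariance matrix of
    repeated measures: 1 under sphericity, as low as 1/(k-1) otherwise."""
    k = S.shape[0]
    D = (np.eye(k)[:-1] - np.eye(k)[1:]).T   # k x (k-1) adjacent contrasts
    Q, _ = np.linalg.qr(D)                   # orthonormal contrast basis
    lam = np.linalg.eigvalsh(Q.T @ S @ Q)    # eigenvalues in contrast space
    # epsilon = (sum lam)^2 / ((k-1) * sum lam^2): 1 iff all lam equal,
    # i.e. iff all normalized contrasts have equal variance (sphericity)
    return lam.sum() ** 2 / ((k - 1) * (lam ** 2).sum())
```

Both an identity covariance and a compound-symmetric one give ε = 1 (sphericity holds), while heterogeneous variances pull ε below 1, the situation the charts in the paper address.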


Journal ArticleDOI
Yoshio Takane1
TL;DR: In this article, a single-step maximum likelihood estimation procedure is developed for multidimensional scaling of dissimilarity data measured on rating scales; the procedure can fit the Euclidean distance model to the data under various assumptions about category widths.
Abstract: A single-step maximum likelihood estimation procedure is developed for multidimensional scaling of dissimilarity data measured on rating scales. The procedure can fit the Euclidean distance model to the data under various assumptions about category widths and under two distributional assumptions. The scoring algorithm for parameter estimation has been developed and implemented in the form of a computer program. Practical uses of the method are demonstrated with an emphasis on various advantages of the method as a statistical procedure.

92 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of deciding whether a set of mental test data is consistent with any one of a large class of item response models is considered, and necessary and sufficient conditions are derived for an LND item response model to fit a given set of data.
Abstract: The problem of deciding whether a set of mental test data is consistent with any one of a large class of item response models is considered. The “classical” assumption of local independence is weakened to a new condition, local nonnegative dependence (LND). Necessary and sufficient conditions are derived for an LND item response model to fit a set of data. This leads to a condition that a set of data must satisfy if it is to be representable by any item response model that assumes both local independence and monotone item characteristic curves. An example is given to show that LND is strictly weaker than local independence. Thus rejection of LND models implies rejection of all item response models that assume local independence for a given set of data.

91 citations


Journal ArticleDOI
TL;DR: In this paper, confirmatory factor analysis is considered from a Bayesian viewpoint, in which prior information on the parameters is incorporated in the analysis, and an iterative algorithm is developed to obtain the Bayes estimates.
Abstract: Confirmatory factor analysis is considered from a Bayesian viewpoint, in which prior information on the parameters is incorporated in the analysis. An iterative algorithm is developed to obtain the Bayes estimates. A numerical example based on longitudinal data is presented. A simulation study is designed to compare the Bayesian approach with the maximum likelihood method.

79 citations


Journal ArticleDOI
TL;DR: In this paper, the convergence properties of several algorithms for computing the greatest lower bound to reliability or the constrained minimum trace communality solution in factor analysis have been examined, and it is shown that a slightly modified version of one method suggested by Bentler and Woodward can safely be applied to any set of data.
Abstract: In the last decade several algorithms for computing the greatest lower bound to reliability or the constrained minimum-trace communality solution in factor analysis have been developed. In this paper convergence properties of these methods are examined. Instead of using Lagrange multipliers, a new theorem is applied that gives a sufficient condition for a symmetric matrix to be Gramian. Whereas computational pitfalls for two methods suggested by Woodhouse and Jackson can be constructed, it is shown that a slightly modified version of one method suggested by Bentler and Woodward can safely be applied to any set of data. A uniqueness proof for the solution desired is offered.

Journal ArticleDOI
TL;DR: The approach is limited to the case of one-mode two-way proximity data, but could be extended in a relatively straightforward way to two-mode two-way, two-mode three-way, or even three-mode three-way data, under the assumption of such models as INDSCAL or the two- or three-way unfolding models.
Abstract: A maximum likelihood estimation procedure is developed for multidimensional scaling when (dis)similarity measures are taken by ranking procedures such as the method of conditional rank orders or the method of triadic combinations. The central feature of these procedures may be termed directionality of ranking processes. That is, rank orderings are performed in a prescribed order by successive first choices. Those data have conventionally been analyzed by Shepard-Kruskal type of nonmetric multidimensional scaling procedures. We propose, as a more appropriate alternative, a maximum likelihood method specifically designed for this type of data. A broader perspective on the present approach is given, which encompasses a wide variety of experimental methods for collecting dissimilarity data including pair comparison methods (such as the method of tetrads) and the pick-M method of similarities. An example is given to illustrate various advantages of nonmetric maximum likelihood multidimensional scaling as a statistical method. At the moment the approach is limited to the case of one-mode two-way proximity data, but could be extended in a relatively straightforward way to two-mode two-way, two-mode three-way or even three-mode three-way data, under the assumption of such models as INDSCAL or the two or three-way unfolding models.

Journal ArticleDOI
TL;DR: In this paper, an expression is given for weighted least squares estimators of oblique common factors, constrained to have the same covariance matrix as the factors they estimate, and a proof of uniqueness is given.
Abstract: An expression is given for weighted least squares estimators of oblique common factors, constrained to have the same covariance matrix as the factors they estimate. It is shown that if, as in exploratory factor analysis, the common factors are obtained by oblique transformation from the Lawley-Rao basis, the constrained estimators are given by the same transformation. Finally, a proof of uniqueness is given.

Journal ArticleDOI
TL;DR: In this paper, the nature of redundancy analysis and its relationships to canonical correlation and multivariate multiple linear regression are clarified; when the regression problem is extended to include an orthogonal rotation of the components, the solution is shown to be identical to van den Wollenberg's maximum redundancy solution.
Abstract: This paper attempts to clarify the nature of redundancy analysis and its relationships to canonical correlation and multivariate multiple linear regression. Stewart and Love introduced redundancy analysis to provide non-symmetric measures of the dependence of one set of variables on the other, as channeled through the canonical variates. Van den Wollenberg derived sets of variates which directly maximize the between set redundancy. Multivariate multiple linear regression on component scores (such as principal components) is considered. The problem is extended to include an orthogonal rotation of the components. The solution is shown to be identical to van den Wollenberg's maximum redundancy solution.

Journal ArticleDOI
TL;DR: In this paper, an extension of Wollenberg's redundancy analysis is proposed to derive Y-variates corresponding to the optimal X-variates; these variates are maximally correlated with the given X-variates and, depending upon the standardization chosen, also have certain properties of orthogonality.
Abstract: An extension of Wollenberg's redundancy analysis is proposed to derive Y-variates corresponding to the optimal X-variates. These variates are maximally correlated with the given X-variates, and depending upon the standardization chosen they also have certain properties of orthogonality.

Journal ArticleDOI
Ingram Olkin1
TL;DR: In this paper, it was shown that for a trivariate distribution if two correlations are fixed the remaining one is constrained, and that if one correlation is fixed, then the remaining two are constrained.
Abstract: It is well-known that for a trivariate distribution if two correlations are fixed the remaining one is constrained. Indeed, if one correlation is fixed, then the remaining two are constrained. Both results are extended to the case of a multivariate distribution. The results are applied to some special patterned matrices.
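The trivariate constraint has a well-known closed form that is easy to verify numerically: fixing r12 and r23 confines r13 to r12·r23 ± √((1 − r12²)(1 − r23²)), the values that keep the 3 × 3 correlation matrix positive semidefinite. The helpers below are an illustration, not code from the paper.

```python
import numpy as np

def r13_bounds(r12, r23):
    """Admissible range of the third correlation in a trivariate
    distribution once r12 and r23 are fixed."""
    slack = np.sqrt((1.0 - r12 ** 2) * (1.0 - r23 ** 2))
    return r12 * r23 - slack, r12 * r23 + slack

def corr3(r12, r13, r23):
    """Assemble the 3 x 3 correlation matrix."""
    return np.array([[1.0, r12, r13],
                     [r12, 1.0, r23],
                     [r13, r23, 1.0]])
```

For example, with r12 = r23 = 0.8 the admissible interval is [0.28, 1.00]: a small value such as r13 = 0.2 makes the matrix indefinite, so no trivariate distribution can realize it.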

Journal ArticleDOI
TL;DR: A recursive dynamic programming strategy is discussed for optimally reorganizing the rows and simultaneously the columns of an n × n proximity matrix when the objective function measuring the adequacy of a reorganization has a fairly simple additive structure.
Abstract: A recursive dynamic programming strategy is discussed for optimally reorganizing the rows and simultaneously the columns of an n × n proximity matrix when the objective function measuring the adequacy of a reorganization has a fairly simple additive structure. A number of possible objective functions are mentioned along with several numerical examples using Thurstone's paired comparison data on the relative seriousness of crime. Finally, the optimization tasks we propose to attack with dynamic programming are placed in a broader theoretical context of what is typically referred to as the quadratic assignment problem and its extension to cubic assignment.
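For a small matrix the recursion over subsets can be written down directly; the additive objective used below (maximize the above-diagonal sum, a dominance criterion) and the tiny planted matrix are illustrative stand-ins for the objectives and data discussed in the paper.

```python
import numpy as np

def optimal_order(A):
    """Optimal simultaneous row/column reordering of an n x n matrix by
    dynamic programming over subsets; objective: maximize the sum of
    entries placed above the diagonal. O(2^n * n^2), so small n only."""
    n = A.shape[0]
    best = {frozenset(): (0.0, ())}
    for _ in range(n):
        grown = {}
        for placed, (score, perm) in best.items():
            for j in range(n):
                if j in placed:
                    continue
                # Additive structure: appending j to a prefix adds only
                # A[i, j] over the already-placed rows i, regardless of
                # the order within the prefix -- this is what lets the
                # DP keep one best permutation per subset
                gain = sum(A[i, j] for i in placed)
                key = placed | {j}
                cand = (score + gain, perm + (j,))
                if key not in grown or cand[0] > grown[key][0]:
                    grown[key] = cand
        best = grown
    return best[frozenset(range(n))]

# Dominance matrix whose planted optimal order is (2, 0, 1):
# A[i, j] = 1 exactly when i should precede j
A = np.zeros((3, 3))
for i, j in [(2, 0), (2, 1), (0, 1)]:
    A[i, j] = 1.0

score, perm = optimal_order(A)
```

The exponential state space is why the paper frames the exact method against the (NP-hard) quadratic assignment problem: the DP is practical only for modestly sized proximity matrices.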

Journal ArticleDOI
TL;DR: In this article, limitations and extensions of Feldt's approach to testing the equality of Cronbach's alpha coefficients in independent and matched samples are discussed; in particular, this approach is used to test the equality of intraclass correlation coefficients.
Abstract: Limitations and extensions of Feldt's approach to testing the equality of Cronbach's alpha coefficients in independent and matched samples are discussed. In particular, this approach is used to test equality of intraclass correlation coefficients.

Journal ArticleDOI
TL;DR: In this article, pairwise preference data are represented as a monotone integral transformation of difference on the underlying stimulus-object or utility scale, and the parameters of the transformation and the underlying scale values or utilities are estimated by maximum likelihood with inequality constraints on the transformation parameters.
Abstract: Pairwise preference data are represented as a monotone integral transformation of difference on the underlying stimulus-object or utility scale. The class of monotone transformations considered is that in which the kernel of the integral is a linear combination of B-splines. Two types of data are analyzed: binary and continuous. The parameters of the transformation and the underlying scale values or utilities are estimated by maximum likelihood with inequality constraints on the transformation parameters. Various hypothesis tests and interval estimates are developed. Examples of artificial and real data are presented.

Journal ArticleDOI
TL;DR: The statement of the conditions is improved and shown to provide necessary and sufficient conditions for a hierarchical clustering strategy to be monotonic.
Abstract: Milligan presented the conditions that are required for a hierarchical clustering strategy to be monotonic, based on a formula by Lance and Williams. In the present paper, the statement of the conditions is improved and shown to provide necessary and sufficient conditions.

Journal ArticleDOI
TL;DR: In this paper, the asymptotic joint distribution of all 2^k − 1 squared multiple correlations is derived, and the asymptotic joint distribution of commonality components is obtained as a special case.
Abstract: Commonality components have been defined as a method of partitioning squared multiple correlations. In this paper, the asymptotic joint distribution of all 2^k − 1 squared multiple correlations is derived. The asymptotic joint distribution of linear combinations of squared multiple correlations is obtained as a corollary. In particular, the asymptotic joint distribution of commonality components is derived as a special case. Simultaneous and nonsimultaneous asymptotic confidence intervals for commonality components can be obtained from this distribution.

Journal ArticleDOI
TL;DR: In this article, a linear utility model is introduced for optimal selection when several subpopulations of applicants are to be distinguished, and procedures are described for obtaining optimal cutting scores in sub-populations in quota-free as well as quota-restricted selection situations.
Abstract: A linear utility model is introduced for optimal selection when several subpopulations of applicants are to be distinguished. Using this model, procedures are described for obtaining optimal cutting scores in subpopulations in quota-free as well as quota-restricted selection situations. The cutting scores are optimal in the sense that they maximize the overall expected utility of the selection process. The procedures are demonstrated with empirical data.

Journal ArticleDOI
TL;DR: In this paper, the effect of unreliable criterion scores on the optimal decision rule is examined, and it is illustrated how qualitative information can be combined with aptitude measurements to improve treatment assignment decisions.
Abstract: For assigning subjects to treatments the point of intersection of within-group regression lines is ordinarily used as the critical point. This decision rule is criticized and, for several utility functions and any number of treatments, replaced by optimal monotone, nonrandomized (Bayes) rules. Both treatments with and without mastery scores are considered. Moreover, the effect of unreliable criterion scores on the optimal decision rule is examined, and it is illustrated how qualitative information can be combined with aptitude measurements to improve treatment assignment decisions. Although the models in this paper are presented with special reference to the aptitude-treatment interaction problem in education, it is indicated that they apply to a variety of situations in which subjects are assigned to treatments on the basis of some predictor score, as long as there are no allocation quota considerations.

Journal ArticleDOI
TL;DR: The developed model, which is an extension of Tucker's three-mode factor analytic model, allows for the simultaneous analysis of all modes of a four-mode data matrix and the consideration of relationships among the modes.
Abstract: A model for four-mode component analysis is developed and presented. The developed model, which is an extension of Tucker's three-mode factor analytic model, allows for the simultaneous analysis of all modes of a four-mode data matrix and the consideration of relationships among the modes. An empirical example based upon viewer perceptions of repetitive advertising shows the four-mode model applicable to real data.

Journal ArticleDOI
Wayne S. DeSarbo1
TL;DR: The canonical correlation is the maximum correlation between linear functions (canonical factors) of two sets of variables measured on the same subjects; the redundancy measure, developed by Stewart and Love [1968], is an alternative, non-symmetric statistic for investigating such interrelationships.
Abstract: The interrelationships between two sets of measurements made on the same subjects can be studied by canonical correlation. Originally developed by Hotelling [1936], the canonical correlation is the maximum correlation between linear functions (canonical factors) of the two sets of variables. An alternative statistic to investigate the interrelationships between two sets of variables is the redundancy measure, developed by Stewart and Love [1968]. Van Den Wollenberg [1977] has developed a method of extracting factors which maximize redundancy, as opposed to canonical correlation.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a full-fledged matrix derivation of Sherin's matrix formulation of Kaiser's varimax criterion using matrix differential calculus in conjunction with the Hadamard (or Schur) matrix product.
Abstract: The author provides a full-fledged matrix derivation of Sherin's matrix formulation of Kaiser's varimax criterion. He uses matrix differential calculus in conjunction with the Hadamard (or Schur) matrix product. Two results on Hadamard products are presented.

Journal ArticleDOI
TL;DR: This paper showed that a particular class of multivariate stochastic processes, namely those associated with the Marshall-Olkin multivariate exponential distribution, generates several of these models, such as Tversky's elimination by aspects model, Edgell and Geisler's additive random aspects model and Shepard and Arabie's additive cluster model.
Abstract: Various recent works have developed “feature” or “aspect” models of similarity and preference. These models are more concerned with the fine detail of the judgment process than were prior models, but nevertheless they have not in general developed an underlying stochastic process compatible with the assumed structure. In this paper, we show that a particular class of multivariate stochastic processes, namely those associated with the Marshall-Olkin multivariate exponential distribution, generates several of these models. In particular, such stochastic processes (appropriately interpreted) yield Tversky's elimination by aspects model, Edgell and Geisler's (normal) additive random aspects model, and Shepard and Arabie's additive cluster model.

Journal ArticleDOI
TL;DR: The purpose of this paper is to elaborate on the consequences of assuming a variable excitatory state and to formulate the concomitant model.
Abstract: The Triangular Constant Method was designed for the measurement of discriminability between sensory stimuli. Its original model assumes a steady excitatory detection state. The purpose of this paper is to elaborate on the consequences of assuming a variable excitatory state and to formulate the concomitant model.

Journal ArticleDOI
TL;DR: A simple method to treat the so-called Heywood case in the minres method in factor analysis requiring less computing time and enjoying higher numerical stability is described.
Abstract: The method proposed by Harman and Fukuda to treat the so-called Heywood case in the minres method in factor analysis, i.e., the case where the resulting communalities are greater than one, involves the frequent solution of eigenvalue problems. A simple method to treat this problem requiring less computing time and enjoying higher numerical stability is described in this paper.

Journal ArticleDOI
TL;DR: In this paper, asymptotic distribution theory of Brogden's form of the biserial correlation coefficient was derived and large-sample estimates of its standard error were obtained; other modifications of the statistic were evaluated, and on the basis of these results, recommendations for the choice of estimator of biserial correlation were presented.
Abstract: Asymptotic distribution theory of Brogden's form of the biserial correlation coefficient is derived, and large-sample estimates of its standard error are obtained. Its efficiency relative to the biserial correlation coefficient is examined. Other modifications of the statistic are evaluated, and on the basis of these results, recommendations for choice of estimator of biserial correlation are presented.