
Showing papers in "Psychometrika in 1990"


Journal ArticleDOI
TL;DR: In this paper, a statistical model containing individual parameters and a structure on both the first and second moments of the random variables reflecting growth is presented as a method for representing development, with a numerical illustration using data initially collected by Nesselroade and Baltes.
Abstract: As a method for representing development, latent curve analysis is presented in terms of a statistical model containing individual parameters and a structure on both the first and second moments of the random variables reflecting growth. Maximum likelihood parameter estimates and associated asymptotic tests follow directly. These procedures may be viewed as an alternative to standard repeated measures ANOVA and to first-order auto-regressive methods. As formulated, the model encompasses cohort sequential designs and allows for period or practice effects. A numerical illustration using data initially collected by Nesselroade and Baltes is presented.
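As a hedged sketch of the kind of moment structure described (generic latent curve notation, not necessarily the authors' exact parameterization): with y_i the vector of repeated measures for person i,

$$ y_i = \Lambda \eta_i + \varepsilon_i, \qquad E(y_i) = \Lambda \mu_\eta, \qquad \operatorname{Cov}(y_i) = \Lambda \Psi \Lambda' + \Theta, $$

where the columns of \Lambda are basis curves (for linear growth, a column of ones and a column of occasion codes), \mu_\eta and \Psi are the mean vector and covariance matrix of the individual growth parameters \eta_i, and \Theta is the residual covariance matrix. Structuring both the first and second moments is what separates this approach from repeated measures ANOVA, which models means only.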

1,379 citations


Journal ArticleDOI
TL;DR: In this paper, it is argued that the usual assumption of local independence be replaced by a weaker assumption, essential independence, and that unidimensionality be replaced by essential unidimensionality, which implies the existence of a unique unidimensional latent ability that can be consistently estimated, in an ordinal scaling sense, using any balanced empirical scaling.
Abstract: Using an infinite item test framework, it is argued that the usual assumption of local independence be replaced by a weaker assumption, essential independence. A fortiori, the usual assumption of unidimensionality is replaced by a weaker and arguably more appropriate, statistically testable assumption of essential unidimensionality. Essential unidimensionality implies the existence of a “unique” unidimensional latent ability. Essential unidimensionality is equivalent to the “consistent” estimation of this latent ability in an ordinal scaling sense using any balanced empirical scaling. A variation of this estimation approach allows consistent estimation of ability on the given latent ability scale.
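As a sketch of the weakened assumption (following the usual statement of essential independence; details of the paper's formulation may differ): item responses U_1, …, U_n are essentially independent with respect to \theta if the average absolute conditional covariance vanishes as the test lengthens,

$$ \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} \bigl| \operatorname{Cov}(U_i, U_j \mid \theta) \bigr| \;\longrightarrow\; 0 \quad (n \to \infty), $$

whereas local independence requires full conditional independence of the responses to hold exactly for every n.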

393 citations


Journal ArticleDOI
TL;DR: The authors used substantive theory to differentiate the likelihoods of response vectors under a fixed set of strategies, and modeled response probabilities in terms of item parameters for each strategy, proportions of subjects employing each strategy, and distributions of subject proficiency within strategies.
Abstract: A model is presented for item responses when different subjects employ different strategies, but only responses, not choice of strategy, can be observed. Using substantive theory to differentiate the likelihoods of response vectors under a fixed set of strategies, we model response probabilities in terms of item parameters for each strategy, proportions of subjects employing each strategy, and distributions of subject proficiency within strategies. The probabilities that an individual subject employed the various strategies can then be obtained, along with a conditional estimate of proficiency under each. A conceptual example discusses response strategies for spatial rotation tasks, and a numerical example resolves a population of subjects into subpopulations of valid responders and random guessers.
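A hedged sketch of the mixture structure described (generic notation, not the authors' own): with strategies k = 1, …, K used in proportions \pi_k, strategy-specific item response functions P_{jk}, and strategy-specific ability densities g_k, the marginal probability of a response vector x is

$$ P(\mathbf{x}) = \sum_{k=1}^{K} \pi_k \int \prod_{j=1}^{J} P_{jk}(x_j \mid \theta)\, g_k(\theta)\, d\theta, $$

and the posterior probability that a subject with responses x employed strategy k follows from Bayes' theorem,

$$ P(k \mid \mathbf{x}) = \frac{\pi_k \int \prod_j P_{jk}(x_j \mid \theta)\, g_k(\theta)\, d\theta}{P(\mathbf{x})}. $$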

285 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider total and direct effects in linear structural equation models and conclude that the results in the literature for the total effects of dependent variables on other dependent variables are not equilibrium multipliers, and thus the usual interpretation is incorrect.
Abstract: This paper considers total and direct effects in linear structural equation models. Adopting a causal perspective that is implicit in much of the literature on the subject, the paper concludes that in many instances the effects do not admit the interpretations imparted in the literature. Drawing a distinction between concomitants and factors, the paper concludes that a concomitant has neither total nor direct effects on other variables. When a variable is a factor and one or more intervening variables are concomitants, the notion of a direct effect is not causally meaningful. Even when the notion of a direct effect is meaningful, the usual estimate of this quantity may be inappropriate. The total effect is usually interpreted as an equilibrium multiplier. In the case where there are simultaneity relations among the dependent variables in the model, the results in the literature for the total effects of dependent variables on other dependent variables are not equilibrium multipliers, and thus, the usual interpretation is incorrect. To remedy some of these deficiencies, a new effect, the total effect of a factor X on an outcome Y, holding a set of variables F constant, is defined. When defined, the total and direct effects are a special case of this new effect, and the total effect of a dependent variable on a dependent variable is an equilibrium multiplier.

271 citations


Journal ArticleDOI
TL;DR: In this paper, the applicability of the large sample theory to maximum likelihood estimates of total indirect effects in sample sizes of 50, 100, 200, 400, and 800 was examined using Monte Carlo methods, and the results suggest that sample sizes of 200 or more and 400 or more are required for models such as Model 1 and Model 2, respectively.
Abstract: The large sample distribution of total indirect effects in covariance structure models is well known. Using Monte Carlo methods, this study examines the applicability of the large sample theory to maximum likelihood estimates of total indirect effects in sample sizes of 50, 100, 200, 400, and 800. Two models are studied. Model 1 is a recursive model with observable variables and Model 2 is a nonrecursive model with latent variables. For the large sample theory to apply, the results suggest that sample sizes of 200 or more and 400 or more are required for models such as Model 1 and Model 2, respectively.
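For orientation, a minimal sketch of the effect decompositions whose estimates the study simulates (standard covariance structure results; the coefficient values below are hypothetical, and the simulation design itself is not reproduced):

```python
import numpy as np

# Structural model: eta = B @ eta + Gamma @ xi + zeta
# (hypothetical coefficient values, for illustration only)
B = np.array([[0.0, 0.0],
              [0.5, 0.0]])      # effects among dependent variables
Gamma = np.array([[0.7],
                  [0.2]])       # effects of exogenous on dependent variables

I = np.eye(B.shape[0])

# Total effects of xi on eta: (I - B)^{-1} Gamma; indirect = total - direct
total_x = np.linalg.inv(I - B) @ Gamma
indirect_x = total_x - Gamma

# Total effects of eta on eta: (I - B)^{-1} B; indirect = total - direct
total_e = np.linalg.inv(I - B) @ B
indirect_e = total_e - B

print(indirect_x)  # here xi -> eta1 -> eta2 contributes 0.5 * 0.7 = 0.35
```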

191 citations


Journal ArticleDOI
TL;DR: In this paper, the authors extended the biplot technique to canonical correlation analysis and redundancy analysis, and the plot of structure correlations was shown to be optimal for displaying the pairwise correlations between the variables of one set and those of the second.
Abstract: This paper extends the biplot technique to canonical correlation analysis and redundancy analysis. The plot of structure correlations is shown to be optimal for displaying the pairwise correlations between the variables of one set and those of the second. The link between multivariate regression and canonical correlation analysis/redundancy analysis is exploited for producing an optimal biplot that displays a matrix of regression coefficients. This plot can be made from the canonical weights of the predictors and the structure correlations of the criterion variables. An example is used to show how the proposed biplots may be interpreted.
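A sketch of one way to compute such biplot markers under the standard SVD formulation of canonical correlation analysis (a reconstruction consistent with the abstract, not the paper's own code; function and variable names are hypothetical):

```python
import numpy as np

def cca_biplot_markers(X, Y):
    """Predictor markers (canonical weights scaled by the canonical
    correlations) and criterion markers (structure correlations) whose
    inner products reproduce the standardized regression coefficients
    Rxx^{-1} Rxy of Y on X."""
    X = (X - X.mean(0)) / X.std(0)          # standardize predictors
    Y = (Y - Y.mean(0)) / Y.std(0)          # standardize criteria
    n = X.shape[0]
    Rxx, Ryy, Rxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n

    def inv_sqrt(R):                        # symmetric inverse square root
        w, V = np.linalg.eigh(R)
        return V @ np.diag(w ** -0.5) @ V.T

    Rxx_h, Ryy_h = inv_sqrt(Rxx), inv_sqrt(Ryy)
    U, s, Vt = np.linalg.svd(Rxx_h @ Rxy @ Ryy_h, full_matrices=False)
    A = Rxx_h @ U                           # canonical weights, predictors
    B = Ryy_h @ Vt.T                        # canonical weights, criteria
    G = A * s                               # predictor biplot markers
    H = Ryy @ B                             # structure correlations, criteria
    return G, H                             # G @ H.T equals Rxx^{-1} @ Rxy
```

Plotting the first two columns of G and H then gives a biplot whose inner products approximate the regression coefficient matrix in a least squares sense.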

173 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the alternative model in which a small minority of persons has an answer strategy not described by the Rasch model and classified each respondent as either anomalous or conforming to the model.
Abstract: This paper deals with the situation of an investigator who has collected the scores of n persons on a set of k dichotomous items, and wants to investigate whether the answers of all respondents are compatible with the one parameter logistic test model of Rasch. Contrary to the standard analysis of the Rasch model, where all persons are kept in the analysis and badly fitting items may be removed, this paper studies the alternative model in which a small minority of persons has an answer strategy not described by the Rasch model. Such persons are called anomalous or aberrant. From the response vectors consisting of k symbols each equal to 0 or 1, it is desired to classify each respondent as either anomalous or conforming to the model. As this model is probabilistic, such a classification will possibly involve false positives and false negatives. Both for the Rasch model and for other item response models, the literature contains several proposals for a person fit index, which expresses for each individual the plausibility that his/her behavior follows the model. The present paper argues that such indices can only provide a satisfactory solution to the classification problem if their statistical distribution is known under the null hypothesis that all persons answer according to the model. This distribution, however, turns out to be rather different for different values of the person's latent trait value. This value will be called “ability parameter”, although our results are equally valid for Rasch scales measuring other attributes.
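One familiar index of the kind the paper discusses is the standardized log-likelihood statistic (often written l_z); the sketch below is a generic reconstruction, not the paper's own proposal:

```python
import numpy as np

def person_fit_lz(u, p):
    """Standardized log-likelihood person-fit index for one respondent.
    u : 0/1 response vector;  p : model-implied success probabilities
    (e.g., Rasch probabilities at the person's estimated ability)."""
    u = np.asarray(u, float)
    p = np.asarray(p, float)
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))    # E(l0 | model)
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)     # Var(l0 | model)
    return (l0 - e) / np.sqrt(v)  # large negative values suggest aberrance
```

The paper's point is that the null distribution of such indices varies with the person's ability parameter, so a single cutoff cannot yield a constant rate of false positives across the latent scale.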

166 citations


Journal ArticleDOI
TL;DR: Item response theory (IRT) models are now in common use for the analysis of dichotomous item responses; as discussed by the authors, this paper examines the sampling theory foundations for statistical inference in these models.
Abstract: Item response theory (IRT) models are now in common use for the analysis of dichotomous item responses. This paper examines the sampling theory foundations for statistical inference in these models. The discussion includes: some history on the “stochastic subject” versus the random sampling interpretations of the probability in IRT models; the relationship between three versions of maximum likelihood estimation for IRT models; estimating θ versus estimating θ-predictors; IRT models and loglinear models; the identifiability of IRT models; and the role of robustness and Bayesian statistics from the sampling theory perspective.

162 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compared the conventional method of measuring ability, which is based on items with assumed true parameter values obtained from a pretest, compared to a Bayesian method that deals with the uncertainties of such items.
Abstract: The conventional method of measuring ability, which is based on items with assumed true parameter values obtained from a pretest, is compared to a Bayesian method that deals with the uncertainties of such items. Computational expressions are presented for approximating the posterior mean and variance of ability under the three-parameter logistic (3PL) model. A 1987 American College Testing Program (ACT) math test is used to demonstrate that the standard practice of using maximum likelihood or empirical Bayes techniques may seriously underestimate the uncertainty in estimated ability when the pretest sample is only moderately large.
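A minimal numerical sketch of the posterior mean and variance of ability under the 3PL model when the item parameters are treated as fixed and known (the paper's contribution is precisely to propagate pretest uncertainty in these parameters, which this sketch deliberately omits; all values hypothetical):

```python
import numpy as np

def posterior_theta(u, a, b, c, grid=np.linspace(-4, 4, 201)):
    """Posterior mean and variance of ability under the 3PL model with
    a standard normal prior, computed by numerical integration on a grid.
    Item parameters a, b, c are treated as known point values."""
    u, a, b, c = (np.asarray(z, float) for z in (u, a, b, c))
    # 3PL response function at each grid point (rows) and item (columns)
    P = c + (1 - c) / (1 + np.exp(-1.7 * a * (grid[:, None] - b)))
    like = np.prod(np.where(u == 1, P, 1 - P), axis=1)
    post = like * np.exp(-grid ** 2 / 2)      # likelihood times prior
    post /= post.sum()
    mean = np.sum(grid * post)
    return mean, np.sum((grid - mean) ** 2 * post)

# e.g., posterior_theta([1, 0, 1], a=[1.2, 0.8, 1.5],
#                       b=[0.0, 0.5, -0.5], c=[0.2, 0.2, 0.2])
```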

143 citations


Journal ArticleDOI
TL;DR: In this article, a three-stage estimator of the thresholds and the covariance structure parameters, based on partition maximum likelihood and generalized least squares estimation, is proposed for structural equation models with polytomous variables.
Abstract: This paper is concerned with the analysis of structural equation models with polytomous variables. A computationally efficient three-stage estimator of the thresholds and the covariance structure parameters, based on partition maximum likelihood and generalized least squares estimation, is proposed. An example is presented to illustrate the method.

85 citations


Journal ArticleDOI
TL;DR: The Dutch identity as mentioned in this paper is a useful way to reexpress the basic equations of item response models that relate the manifest probabilities to the item response functions (IRFs) and the latent trait distribution.
Abstract: The Dutch Identity is a useful way to reexpress the basic equations of item response models that relate the manifest probabilities to the item response functions (IRFs) and the latent trait distribution. The identity may be exploited in several ways. For example: (a) to suggest how item response models behave for large numbers of items—they are approximate submodels of second-order loglinear models for 2^J tables; (b) to suggest new ways to assess the dimensionality of the latent trait—principal components analysis of matrices composed of second-order interactions from loglinear models; (c) to give insight into the structure of latent class models; and (d) to illuminate the problem of identifying the IRFs and the latent trait distribution from sample data.

Journal ArticleDOI
TL;DR: In introducing the LISREL model for systems of linear structural equations, Jöreskog and Sörbom proposed two goodness-of-fit indices, GFI and AGFI; this paper discusses their asymptotic distributions and some statistical properties.
Abstract: In introducing the LISREL model for systems of linear structural equations, Jöreskog and Sörbom proposed two goodness-of-fit indices, GFI and AGFI. Their asymptotic distributions and some statistical properties are discussed.
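For reference, the maximum likelihood versions of the indices under discussion are commonly written as

$$ \mathrm{GFI} = 1 - \frac{\operatorname{tr}\bigl[(\hat\Sigma^{-1} S - I)^2\bigr]}{\operatorname{tr}\bigl[(\hat\Sigma^{-1} S)^2\bigr]}, \qquad \mathrm{AGFI} = 1 - \frac{p(p+1)}{2\,df}\,(1 - \mathrm{GFI}), $$

where S is the sample covariance matrix, \hat\Sigma the fitted model covariance matrix, p the number of observed variables, and df the model degrees of freedom; the paper derives the sampling behavior of these quantities.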

Journal ArticleDOI
TL;DR: In this article, the composite direct product model for the multitrait-multimethod matrix is reparameterized as a second-order factor analysis model, which facilitates the use of widely available computer programs such as LISREL and LISCOMP for fitting the model.
Abstract: The composite direct product model for the multitrait-multimethod matrix is reparameterized as a second-order factor analysis model. This facilitates the use of widely available computer programs such as LISREL and LISCOMP for fitting the model.

Journal ArticleDOI
TL;DR: In this article, a generalized least squares approach is presented for incorporating linear constraints on the standardized row and column scores obtained from a canonical analysis of a contingency table, which is easy to implement and may simplify considerably the interpretation of a data matrix.
Abstract: A generalized least squares approach is presented for incorporating linear constraints on the standardized row and column scores obtained from a canonical analysis of a contingency table. The method is easy to implement and may simplify considerably the interpretation of a data matrix. The approach is compared to a restricted maximum likelihood procedure.

Journal ArticleDOI
TL;DR: The PARELLA model as mentioned in this paper is a probabilistic parallelogram model that can be used for the measurement of latent attitudes or latent preferences, and the data analyzed are the dichotomous responses of persons to stimuli, with a one (zero) indicating agreement (disagreement) with the content of the stimulus.
Abstract: The PARELLA model is a probabilistic parallelogram model that can be used for the measurement of latent attitudes or latent preferences. The data analyzed are the dichotomous responses of persons to stimuli, with a one (zero) indicating agreement (disagreement) with the content of the stimulus. The model provides a unidimensional representation of persons and items. The response probabilities are a function of the distance between person and stimulus: the smaller the distance, the larger the probability that a person will agree with the content of the stimulus. An estimation procedure based on expectation maximization and marginal maximum likelihood is developed and the quality of the resulting parameter estimates evaluated.
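The single-peaked response function has, in a form consistent with this description (the exact parameterization may differ from the paper's), the shape

$$ P(X_{vi} = 1 \mid \theta_v, \delta_i) = \frac{1}{1 + (\theta_v - \delta_i)^{2\gamma}}, $$

where θ_v locates person v and δ_i locates stimulus i on the same dimension, and γ > 0 governs how quickly agreement falls off with distance: the probability equals 1 at zero distance and decreases monotonically in |θ_v − δ_i|, which is the unfolding (parallelogram) property.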

Journal ArticleDOI
TL;DR: In this article, information functions are used to find the optimum ability levels and maximum contributions to information for estimating item parameters in three commonly used logistic item response models; examinees who contribute maximally to the estimation of item difficulty contribute little to the estimation of item discrimination, suggesting that in applications that depend heavily upon the veracity of individual item parameter estimates, better item calibration results may be obtained (for fixed sample sizes) from examinee calibration samples in which ability is widely dispersed.
Abstract: Information functions are used to find the optimum ability levels and maximum contributions to information for estimating item parameters in three commonly used logistic item response models. For the three and two parameter logistic models, examinees who contribute maximally to the estimation of item difficulty contribute little to the estimation of item discrimination. This suggests that in applications that depend heavily upon the veracity of individual item parameter estimates (e.g., adaptive testing or test construction), better item calibration results may be obtained (for fixed sample sizes) from examinee calibration samples in which ability is widely dispersed.
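A sketch of the underlying algebra for the two-parameter logistic case (standard Fisher information results consistent with the abstract's claim, not the paper's full treatment of all three models):

```python
import numpy as np

def item_parameter_information(theta, a, b):
    """Per-examinee Fisher information about a 2PL item's difficulty (b)
    and discrimination (a) at ability level theta."""
    P = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    Q = 1.0 - P
    info_b = a ** 2 * P * Q            # largest at theta = b
    info_a = (theta - b) ** 2 * P * Q  # exactly zero at theta = b
    return info_b, info_a

# An examinee located at the item difficulty maximizes information about
# b yet contributes nothing to information about a, so calibration
# samples with widely dispersed ability serve both parameters.
theta = np.linspace(-3, 3, 13)
info_b, info_a = item_parameter_information(theta, a=1.0, b=0.0)
```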

Journal ArticleDOI
TL;DR: It is shown that the parameter set that minimizes the majorizing function also decreases the matrix trace function, which in turn provides a monotonically convergent algorithm for minimizing the matrix trace function iteratively.
Abstract: The problem of minimizing a general matrix trace function, possibly subject to certain constraints, is approached by means of majorizing this function by one having a simple quadratic shape and whose minimum is easily found. It is shown that the parameter set that minimizes the majorizing function also decreases the matrix trace function, which in turn provides a monotonically convergent algorithm for minimizing the matrix trace function iteratively. Three algorithms based on majorization for solving certain least squares problems are shown to be special cases. In addition, by means of several examples, it is noted how algorithms may be provided for a wide class of statistical optimization tasks for which no satisfactory algorithms seem available.
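A minimal illustration of the majorization idea on one simple trace function (a generic quadratic majorizer under stated assumptions, not the paper's general construction): to minimize f(X) = tr(X′AX) − 2 tr(X′B) for symmetric positive semi-definite A, bound the curvature by λ, the largest eigenvalue of A.

```python
import numpy as np

# Because lam*I - A is positive semi-definite for lam = max eigenvalue
# of A, g(X, Y) = f(Y) + 2 tr((X-Y)'(A Y - B)) + lam * ||X - Y||_F^2
# majorizes f(X) and touches it at X = Y.  Minimizing g in closed form
# and iterating therefore decreases f monotonically.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A.T @ A                               # symmetric PSD example matrix
B = rng.standard_normal((5, 3))
lam = np.linalg.eigvalsh(A).max()

def f(X):
    return np.trace(X.T @ A @ X) - 2.0 * np.trace(X.T @ B)

X = np.zeros((5, 3))
for _ in range(500):
    X = X - (A @ X - B) / lam             # minimizer of the majorizer
# f(X) now approximates the unconstrained minimum attained at A^{-1} B.
```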

Journal ArticleDOI
TL;DR: Relations are examined between latent trait and latent class models for item response data, and methods are presented for relating latent class models to factor models for dichotomized variables.
Abstract: Relations are examined between latent trait and latent class models for item response data. Conditions are given for the two-latent class and two-parameter normal ogive models to agree, and relations between their item parameters are presented. Generalizations are then made to continuous models with more than one latent trait and discrete models with more than two latent classes, and methods are presented for relating latent class models to factor models for dichotomized variables. Results are illustrated using data from the Law School Admission Test, previously analyzed by several authors.

Journal ArticleDOI
TL;DR: In this article, a statistical model for networks in which unidirectional relations are measured between two distinct sets of actors is presented, illustrated on a sample data set, and compared to its traditional counterpart; extensions are discussed, including those that model multivariate relations simultaneously and those that allow for the inclusion of attributes of the individuals.
Abstract: Traditional network research analyzes relational ties within a single group of actors; the models presented in this paper involve relational ties existing between two distinct sets of actors. Statistical models for traditional networks in which relations are measured within a group simplify when modeling unidirectional relations measured between groups. The traditional paradigm results in a one-mode sociomatrix; the network paradigm considered in this paper results in a two-mode sociomatrix. A statistical model is presented, illustrated on a sample data set, and compared to its traditional counterpart. Extensions are discussed, including those that model multivariate relations simultaneously, and those that allow for the inclusion of attributes of the individuals in the group.

Journal ArticleDOI
TL;DR: In this article, a general solution for the weighted orthonormal Procrustes problem is proposed in terms of the least squares criterion; for the two-dimensional case, this solution always gives the global minimum, while for the general case an algorithm is proposed that must converge, although not necessarily to a global minimum.
Abstract: A general solution for the weighted orthonormal Procrustes problem is offered in terms of the least squares criterion. For the two-dimensional case, this solution always gives the global minimum; for the general case, an algorithm is proposed that must converge, although not necessarily to the global minimum. In general, the algorithm yields a solution for the problem of how to fit one matrix to another under the condition that the dimensions of the latter matrix first are allowed to be transformed orthonormally and then weighted differentially, which is the task encountered in fitting analogues of the IDIOSCAL and INDSCAL models to a set of configurations.
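For orientation, the unweighted special case has a well-known closed-form solution via the singular value decomposition (a sketch of that classical result only; the paper's weighted extension requires the iterative algorithm it describes):

```python
import numpy as np

def orthonormal_procrustes(A, B):
    """Orthonormal T minimizing ||A - B T||_F in the unweighted case:
    the classical closed-form solution T = U V' from the SVD of B'A."""
    U, _, Vt = np.linalg.svd(B.T @ A, full_matrices=False)
    return U @ Vt
```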

Journal ArticleDOI
TL;DR: In this paper, a generalized Mallows' model for ranking is proposed and the posterior probabilities for the consensus ordering of the items are derived for a given set of items in a set of rankings.
Abstract: In the situation where subjects independently rank order a fixed set of items, the idea of a consensus ordering of the items is defined and employed as a parameter in a class of probability models for rankings. In the context of such models, which generalize those of Mallows, posterior probabilities may be easily formed about the population consensus ordering. An example of rankings obtained by the Graduate Record Examination Board is presented to demonstrate the adequacy of these generalized Mallows' models for describing actual data sets of rankings and to illustrate convenient summaries of the posterior probabilities for the consensus ordering.
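A sketch of the basic Mallows model that the paper's class generalizes (Kendall distance version; brute-force normalization is feasible only for small item sets, and all names here are illustrative):

```python
import numpy as np
from itertools import permutations

def kendall_distance(r1, r2):
    """Number of item pairs on which two rankings disagree in order."""
    n = len(r1)
    return sum((r1[i] < r1[j]) != (r2[i] < r2[j])
               for i in range(n) for j in range(i + 1, n))

def mallows_pmf(consensus, lam):
    """P(ranking) proportional to exp(-lam * d(ranking, consensus))."""
    rankings = list(permutations(range(len(consensus))))
    w = np.array([np.exp(-lam * kendall_distance(r, consensus))
                  for r in rankings])
    return rankings, w / w.sum()

rankings, probs = mallows_pmf(consensus=(0, 1, 2, 3), lam=1.0)
# The consensus ordering receives maximal probability; lam >= 0 controls
# how sharply probability decays with distance from the consensus.
```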

Journal ArticleDOI
TL;DR: In this paper, the squared error loss function for the unidimensional metric scaling problem has a special geometry and it is possible to efficiently find the global minimum for every coordinate conditioned on every other coordinate being held fixed.
Abstract: The squared error loss function for the unidimensional metric scaling problem has a special geometry. It is possible to efficiently find the global minimum for every coordinate conditioned on every other coordinate being held fixed. This approach is generalized to the case in which the coordinates are polynomial functions of exogenous variables. The algorithms shown in the paper are linear in the number of parameters. They always descend and, at convergence, every coefficient of every polynomial is at its global minimum conditioned on every other parameter being held fixed. Convergence is very rapid and Monte Carlo tests show the basic procedure almost always converges to the overall global minimum.
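A sketch of the coordinate-wise global update that this geometry permits, in the basic (non-polynomial) case; the interval bookkeeping below is a reconstruction consistent with the description, not the paper's code. With the other coordinates fixed, the loss in x_k is piecewise quadratic with breakpoints at the other coordinates, so each piece's minimum can be checked exactly:

```python
import numpy as np

def update_coordinate(x, delta, k):
    """Global minimum of sum_j (delta[k, j] - |x[k] - x[j]|)^2 over x[k],
    all other coordinates held fixed.  delta is the symmetric matrix of
    observed dissimilarities."""
    others = [j for j in range(len(x)) if j != k]
    order = np.argsort(x[others])
    xs = x[others][order]                 # other coordinates, sorted
    ds = delta[k, others][order]          # matching dissimilarities
    bounds = np.concatenate(([-np.inf], xs, [np.inf]))
    best_val, best_x = np.inf, x[k]
    for m in range(len(xs) + 1):
        # On piece m (x[k] between bounds[m] and bounds[m+1]), the sign
        # of x[k] - xs[j] is +1 for j < m and -1 otherwise, so each term
        # equals (a_j - x[k])^2 with a_j = xs[j] + sign * ds[j].
        s = np.where(np.arange(len(xs)) < m, 1.0, -1.0)
        a = xs + s * ds
        cand = np.clip(a.mean(), bounds[m], bounds[m + 1])
        val = np.sum((a - cand) ** 2)
        if val < best_val:
            best_val, best_x = val, cand
    return best_x
```

Cycling this update over all coordinates always descends, matching the paper's account of why convergence is rapid.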

Journal ArticleDOI
TL;DR: In this article, a nonspatial operationalization of the Krumhansl distance-density model of similarity is presented, which assumes that the similarity between two objects is a function of both the interpoint distance between i and j and the density of other stimulus points in the regions surrounding i and j.

Abstract: This paper presents a nonspatial operationalization of the Krumhansl (1978, 1982) distance-density model of similarity. This model assumes that the similarity between two objects i and j is a function of both the interpoint distance between i and j and the density of other stimulus points in the regions surrounding i and j. We review this conceptual model and associated empirical evidence for such a specification. A nonspatial, tree-fitting methodology is described which is sufficiently flexible to fit a number of competing hypotheses of similarity formation. A sequential, unconstrained minimization algorithm is technically presented together with various program options. Three applications are provided which demonstrate the flexibility of the methodology. Finally, extensions to spatial models, three-way analyses, and hybrid models are discussed.

Journal ArticleDOI
TL;DR: In this article, a theoretical framework is developed in which the effects of some common violations of the linearity and homoscedasticity assumptions underlying corrections for restriction in range can be investigated, and simple expressions are derived for both the restricted and corrected correlations in terms of the target (unrestricted) correlation.
Abstract: Corrections for restriction in range due to explicit selection assume the linearity of regression and homoscedastic array variances. This paper develops a theoretical framework in which the effects of some common forms of violation of these assumptions on the estimation of the unrestricted correlation can be investigated. Simple expressions are derived for both the restricted and corrected correlations in terms of the target (unrestricted) correlation in these situations.
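For context, the classical correction being scrutinized (explicit selection on X, with linearity and homoscedasticity assumed) is

$$ \hat\rho \;=\; \frac{r\,(S_X/s_X)}{\sqrt{1 - r^2 + r^2\,(S_X^2/s_X^2)}}, $$

where r is the correlation computed in the restricted group and s_X, S_X are the restricted and unrestricted standard deviations of the selection variable; the paper derives how both r and this corrected value behave when linearity or homoscedasticity fails.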

Journal ArticleDOI
TL;DR: An algorithm is described for fitting the DEDICOM model for the analysis of asymmetric data matrices and it is suggested to choose as starting configurations for the algorithm those configurations that yield closed-form solutions in some special cases.
Abstract: An algorithm is described for fitting the DEDICOM model for the analysis of asymmetric data matrices. This algorithm generalizes an algorithm suggested by Takane in that it uses a damping parameter in the iterative process. Takane's algorithm does not always converge monotonically. Based on the generalized algorithm, a modification of Takane's algorithm is suggested such that this modified algorithm converges monotonically. It is suggested to choose as starting configurations for the algorithm those configurations that yield closed-form solutions in some special cases. Finally, a sufficient condition is described for monotonic convergence of Takane's original algorithm.

Journal ArticleDOI
TL;DR: In this article, the exact variance and asymptotic distribution of the average ridit are developed, including the cases in which the reference group is sampled or the comparison group is finite.
Abstract: Ridit analysis is a statistical method for comparing ordinal-scale responses. In this paper, the exact variance and asymptotic distribution of the average ridit are developed, including the cases in which the reference group is sampled or the comparison group is finite. The appropriate use and interpretation of ridit analysis are also discussed.
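A sketch of the basic quantities involved (standard ridit definitions; the paper's exact variance results are not reproduced here):

```python
import numpy as np

def ridits(reference_counts):
    """Ridit of each ordinal category: the proportion of the reference
    group below the category plus half the proportion within it."""
    p = np.asarray(reference_counts, float)
    p /= p.sum()
    return np.cumsum(p) - p / 2

def mean_ridit(comparison_counts, reference_counts):
    """Average ridit of a comparison group: estimates the probability
    that a randomly chosen comparison member exceeds a randomly chosen
    reference member, with ties split.  Equals 0.5 when the two
    distributions coincide."""
    g = np.asarray(comparison_counts, float)
    return float(np.sum(ridits(reference_counts) * g / g.sum()))

# Hypothetical 5-category example; identical shapes give exactly 0.5:
# mean_ridit([5, 10, 20, 10, 5], [10, 20, 40, 20, 10])
```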

Journal ArticleDOI
TL;DR: In this article, a Bayes estimation procedure is introduced that allows the nature and strength of prior beliefs to be easily specified and modal posterior estimates to be obtained as easily as maximum likelihood estimates.
Abstract: A Bayes estimation procedure is introduced that allows the nature and strength of prior beliefs to be easily specified and modal posterior estimates to be obtained as easily as maximum likelihood estimates. The procedure is based on constructing posterior distributions that are formally identical to likelihoods, but are based on sampled data as well as artificial data reflecting prior information. Improvements in performance of modal Bayes procedures relative to maximum likelihood estimation are illustrated for Rasch-type models. Improvements range from modest to dramatic, depending on the model and the number of items being considered.

Journal ArticleDOI
TL;DR: A formal analysis is made of how to project an attribute criterion into the hierarchical classes model for object by attribute data proposed by De Boeck and Rosenberg, demonstrating the usefulness of logical strategies and the complementarity of logical and probabilistic approaches.
Abstract: A formal analysis is made of how to project an attribute criterion into the hierarchical classes model for object by attribute data proposed by De Boeck and Rosenberg. The projection is conceptualized as the prediction of the attribute criterion by means of a logical rule defined on the basis of attribute combinations from the model. Eliminative and constructive strategies are proposed to find logical rules with maximal predictive power and minimal formula complexity. Logical analyses of a real data set are reported and compared with a logistic regression to demonstrate the usefulness of the logical strategies, and to show the complementarity of logical and probabilistic approaches.

Journal ArticleDOI
Yutaka Kano
TL;DR: In this article, the authors investigated the relationship between improper solutions and the number of factors and discussed the properties of the noniterative estimation method of Ihara and Kano in exploratory factor analysis.
Abstract: Based on the usual factor analysis model, this paper investigates the relationship between improper solutions and the number of factors, and discusses the properties of the noniterative estimation method of Ihara and Kano in exploratory factor analysis. The consistency of the Ihara and Kano estimator is shown to hold even for an overestimated number of factors, which provides a theoretical basis for the rare occurrence of improper solutions and for a new method of choosing the number of factors. The comparative study of their estimator and that based on maximum likelihood is carried out by a Monte Carlo experiment.

Journal ArticleDOI
TL;DR: In this article, it was shown that the s-stress criterion leads to a similar weighted least squares solution with w = (n+2)/4, and the main result remains true for the problem of approximating a given n×n matrix with a zero diagonal by a squared distance matrix.
Abstract: We examine the least squares approximation C to a symmetric matrix B, when all diagonal elements get weight w relative to all nondiagonal elements. When B has positivity p and C is constrained to be positive semi-definite, our main result states that, when w ≥ 1/2, the rank of C is never greater than p, and when w ≤ 1/2, the rank of C is at least p. For the problem of approximating a given n×n matrix with a zero diagonal by a squared-distance matrix, it is shown that the s-stress criterion leads to a similar weighted least squares solution with w = (n+2)/4; the main result remains true. Other related problems and algorithmic consequences are briefly discussed.
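For intuition about the constrained problem, the equal-weight case w = 1 has a classical closed form via the eigendecomposition (a sketch of that special case only; the paper's rank results concern general w):

```python
import numpy as np

def nearest_psd(B):
    """Least squares positive semi-definite approximation to symmetric B
    when diagonal and off-diagonal elements are weighted equally (w = 1):
    keep the eigenvectors, zero out the negative eigenvalues."""
    w, V = np.linalg.eigh(B)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T
```

Here the rank of the solution equals the number of positive eigenvalues of B (its positivity p), consistent with the result that for w ≥ 1/2 the rank never exceeds p.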