
Showing papers in "Psychometrika in 2001"


Journal ArticleDOI
TL;DR: In this paper, Satorra and Bentler's scaling corrections are used to improve the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data.
Abstract: A family of scaling corrections aimed at improving the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data was proposed in Satorra and Bentler (1994). For structural equation models, Satorra-Bentler's (SB) scaling corrections are available in standard computer software. Often, however, the interest is not in the overall fit of a model, but in a test of the restrictions that a null model, say M0, implies on a less restricted one, M1. If T0 and T1 denote the goodness-of-fit test statistics associated with M0 and M1, respectively, then typically the difference Td = T0 − T1 is used as a chi-square test statistic with degrees of freedom equal to the difference in the number of independent parameters estimated under the models M0 and M1. As in the case of the goodness-of-fit test, it is of interest to scale the statistic Td in order to improve its chi-square approximation in realistic, that is, nonasymptotic and nonnormal, applications. In a recent paper, Satorra (2000) shows that the difference between two SB scaled test statistics for overall model fit does not yield the correct SB scaled difference test statistic. Satorra developed an expression that permits scaling the difference test statistic, but his formula has some practical limitations, since it requires heavy computations that are not available in standard computer software. The purpose of the present paper is to provide an easy way to compute the scaled difference chi-square statistic from the scaled goodness-of-fit test statistics of models M0 and M1. A Monte Carlo study is provided to illustrate the performance of the competing statistics.
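The computation the abstract advertises is short enough to state directly. A sketch assuming the paper's well-known formula for the difference scaling factor, cd = (df0·c0 − df1·c1)/(df0 − df1); function and argument names are ours:

```python
from scipy.stats import chi2

def sb_scaled_difference(Tbar0, Tbar1, df0, df1, c0, c1):
    """Scaled difference chi-square from two SB-scaled fit statistics.

    Tbar0, Tbar1: SB-scaled chi-squares of the nested model M0 and the
                  less restricted model M1; df0, df1: their degrees of
                  freedom; c0, c1: the reported scaling correction factors.
    """
    # Recover the unscaled statistics (T = c * Tbar), then build the
    # scaling factor for the difference test from the per-model factors.
    T0, T1 = c0 * Tbar0, c1 * Tbar1
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)
    Td_scaled = (T0 - T1) / cd
    dfd = df0 - df1
    return Td_scaled, dfd, chi2.sf(Td_scaled, dfd)
```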

4,011 citations


Journal ArticleDOI
TL;DR: A two-level regression model is imposed on the ability parameters in an item response theory (IRT) model and it will be shown that the parameters of the two-parameter normal ogive model and the multilevel model can be estimated in a Bayesian framework using Gibbs sampling.
Abstract: In this article, a two-level regression model is imposed on the ability parameters in an item response theory (IRT) model. The advantage of using latent rather than observed scores as dependent variables of a multilevel model is that it offers the possibility of separating the influence of item difficulty and ability level and modeling response variation and measurement error. Another advantage is that, contrary to observed scores, latent scores are test-independent, which offers the possibility of using results from different tests in one analysis where the parameters of the IRT model and the multilevel model can be concurrently estimated. The two-parameter normal ogive model is used for the IRT measurement model. It will be shown that the parameters of the two-parameter normal ogive model and the multilevel model can be estimated in a Bayesian framework using Gibbs sampling. Examples using simulated and real data are given.
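A minimal sketch of the measurement model named in the abstract; the function is ours, and Phi(a*(theta − b)) is one common parameterization of the two-parameter normal ogive:

```python
from scipy.stats import norm

def p_correct(theta, a, b):
    # Two-parameter normal ogive: P(X = 1 | theta) = Phi(a * (theta - b)),
    # with discrimination a and difficulty b
    return norm.cdf(a * (theta - b))

# The two-level part places a regression model on the latent abilities,
# e.g. theta_ij = beta_0j + beta_1j * x_ij + e_ij for person i in group j;
# in the paper both layers are estimated jointly by Gibbs sampling.
```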

385 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate the asymptotic and finite sample performance of different factor score regression methods for structural equation models with latent variables and show that the conventional approach performs very badly.
Abstract: Structural equation models with latent variables are sometimes estimated using an intuitive three-step approach, here denoted factor score regression. Consider a structural equation model composed of an explanatory latent variable and a response latent variable related by a structural parameter of scientific interest. In this simple example estimation of the structural parameter proceeds as follows: First, common factor models are separately estimated for each latent variable. Second, factor scores are separately assigned to each latent variable, based on the estimates. Third, ordinary linear regression analysis is performed among the factor scores, producing an estimate for the structural parameter. We investigate the asymptotic and finite sample performance of different factor score regression methods for structural equation models with latent variables. It is demonstrated that the conventional approach to factor score regression performs very badly. Revised factor score regression, using regression factor scores for the explanatory latent variables and Bartlett scores for the response latent variables, produces consistent estimators for all parameters.
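The contrast between the two kinds of factor scores combined by the revised method is easy to state in code. A minimal sketch, assuming centered data X (n × p), estimated loadings Lam, unique-variance matrix Psi, and factor covariance Phi; names are ours:

```python
import numpy as np

def regression_scores(X, Lam, Psi, Phi):
    # Regression (Thomson) scores: E[f | x] under the factor model
    Sigma = Lam @ Phi @ Lam.T + Psi              # implied covariance of x
    W = Phi @ Lam.T @ np.linalg.inv(Sigma)
    return X @ W.T

def bartlett_scores(X, Lam, Psi):
    # Bartlett scores: weighted least squares on the unique variances
    Pinv = np.linalg.inv(Psi)
    W = np.linalg.inv(Lam.T @ Pinv @ Lam) @ Lam.T @ Pinv
    return X @ W.T
```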

339 citations


Journal ArticleDOI
TL;DR: In this paper, a Bayesian procedure to estimate the three-parameter normal ogive model and a generalization of the procedure to a model with multidimensional ability parameters are presented.
Abstract: A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization of the procedure to a model with multidimensional ability parameters are presented. The procedure is a generalization of a procedure by Albert (1992) for estimating the two-parameter normal ogive model. The procedure supports analyzing data from multiple populations and incomplete designs. It is shown that restrictions can be imposed on the factor matrix for testing specific hypotheses about the ability structure. The technique is illustrated using simulated and real data.

332 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present standardized effect size measures for latent mean differences inferred from both structured means modeling and MIMIC approaches to hypothesis testing about differences among means on a single latent construct, which are then related to post hoc power analysis, a priori sample size determination, and a relevant measure of construct reliability.
Abstract: While effect size estimates, post hoc power estimates, and a priori sample size determination are becoming a routine part of univariate analyses involving measured variables (e.g., ANOVA), such measures and methods have not been articulated for analyses involving latent means. The current article presents standardized effect size measures for latent mean differences inferred from both structured means modeling and MIMIC approaches to hypothesis testing about differences among means on a single latent construct. These measures are then related to post hoc power analysis, a priori sample size determination, and a relevant measure of construct reliability.
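The generic form of such a standardized latent effect size is simple. A minimal sketch, assuming the measure is the latent mean difference scaled by the pooled latent standard deviation; names are ours:

```python
def latent_d(kappa_focal, kappa_ref, phi_pooled):
    # Standardized latent mean difference, a latent-variable analogue of
    # Cohen's d: the mean difference in the construct's own metric,
    # divided by the pooled latent standard deviation
    return (kappa_focal - kappa_ref) / phi_pooled ** 0.5
```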

260 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present methods for modeling attribute measures in terms of network ties, and so construct p* models for the patterns of social influence within a network, and compare these models with existing network effects models.
Abstract: This paper generalizes the p* class of models for social network data to predict individual-level attributes from network ties. The p* model for social networks permits the modeling of social relationships in terms of particular local relational or network configurations. In this paper we present methods for modeling attribute measures in terms of network ties, and so construct p* models for the patterns of social influence within a network. Attribute variables are included in a directed dependence graph and the Hammersley-Clifford theorem is employed to derive probability models whose parameters can be estimated using maximum pseudo-likelihood. The models are compared to existing network effects models. They can be interpreted in terms of public or private social influence phenomena within groups. The models are illustrated by an empirical example involving a training course, with trainees' reactions to aspects of the course found to relate to those of their network partners.
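Maximum pseudo-likelihood for p*-type models reduces to an ordinary logistic regression. A minimal sketch, assuming the change statistics have already been computed from the dependence graph; names are ours:

```python
import statsmodels.api as sm

def mple(y, change_stats):
    """Maximum pseudo-likelihood for a p*-type model.

    y            : binary attribute (or tie) indicators
    change_stats : matrix whose columns hold the change in each network
                   configuration count when y_i flips from 0 to 1
    """
    # Pseudo-likelihood reduces estimation to a logistic regression of
    # each variable on its change statistics.
    return sm.Logit(y, sm.add_constant(change_stats)).fit(disp=0)
```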

194 citations


Journal ArticleDOI
TL;DR: A cluster analysis of real-world financial services data revealed that using the variable-selection heuristic prior to the K-means algorithm resulted in greater cluster stability, indicating the heuristic is extremely effective at eliminating masking variables.
Abstract: One of the most vexing problems in cluster analysis is the selection and/or weighting of variables in order to include those that truly define cluster structure, while eliminating those that might mask such structure. This paper presents a variable-selection heuristic for nonhierarchical (K-means) cluster analysis based on the adjusted Rand index for measuring cluster recovery. The heuristic was subjected to Monte Carlo testing across more than 2200 datasets with known cluster structure. The results indicate the heuristic is extremely effective at eliminating masking variables. A cluster analysis of real-world financial services data revealed that using the variable-selection heuristic prior to the K-means algorithm resulted in greater cluster stability.
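The adjusted Rand index that drives the heuristic is standard. A self-contained sketch of the Hubert-Arabie chance-corrected form:

```python
import numpy as np
from math import comb

def adjusted_rand_index(a, b):
    # Cross-tabulate the two partitions, then chance-correct the count
    # of object pairs on which the partitions agree
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    table = np.zeros((ai.max() + 1, bi.max() + 1), dtype=int)
    for i, j in zip(ai, bi):
        table[i, j] += 1
    n = len(a)
    sum_ij = sum(comb(int(x), 2) for x in table.ravel())
    sum_a = sum(comb(int(x), 2) for x in table.sum(axis=1))
    sum_b = sum(comb(int(x), 2) for x in table.sum(axis=0))
    expected = sum_a * sum_b / comb(n, 2)
    max_idx = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_idx - expected)
```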

131 citations


Journal ArticleDOI
TL;DR: In this article, the asymptotic null distribution is derived for statistics which are linear in the item responses, and in which the ability parameter is replaced by an estimate, which allows the standardization of linear person fit statistics with estimated ability parameter.
Abstract: Person fit statistics are considered for dichotomous item response models. The asymptotic null distribution is derived for statistics which are linear in the item responses, and in which the ability parameter is replaced by an estimate. This allows the asymptotically correct standardization of linear person fit statistics with estimated ability parameter. The fact that the ability parameter is estimated usually decreases the asymptotic variance.

126 citations


Journal ArticleDOI
TL;DR: Empirical examples show that the modified algorithm can be reasonably fast, but its purpose is to save an investigator's effort rather than that of his or her computer, making it more appropriate as a research tool than as an algorithm for established methods.
Abstract: A very general algorithm for orthogonal rotation is identified. It is shown that when an algorithm parameter α is sufficiently large the algorithm converges monotonically to a stationary point of the rotation criterion from any starting value. Because a sufficiently large α is in general hard to find, a modification that does not require it is introduced. Without this requirement the modified algorithm is not only very general, but also very simple. Its implementation involves little more than computing the gradient of the rotation criterion. While the modified algorithm converges monotonically from any starting value, it is not guaranteed to converge to a stationary point. It, however, does so in all of our examples. While motivated by the rotation problem in factor analysis, the algorithms discussed may be used to optimize almost any function of a not necessarily square column-wise orthonormal matrix. A number of these more general applications are considered. Empirical examples show that the modified algorithm can be reasonably fast, but its purpose is to save an investigator's effort rather than that of his or her computer. This makes it more appropriate as a research tool than as an algorithm for established methods.
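A heavily simplified sketch of the gradient-projection idea the abstract describes (not the paper's exact algorithm): step along the negative gradient, project back onto the orthonormal matrices via singular value decomposition, and halve the step until the criterion decreases. All names are ours:

```python
import numpy as np

def gradient_projection(A, f, grad_f, alpha=1.0, tol=1e-6, max_iter=500):
    """Minimize a rotation criterion f over orthonormal T.

    A: unrotated loadings; f and grad_f act on rotated loadings A @ T."""
    q = A.shape[1]
    T = np.eye(q)
    fval = f(A @ T)
    for _ in range(max_iter):
        G = A.T @ grad_f(A @ T)        # gradient of f with respect to T
        for _ in range(30):            # halve the step until f decreases
            U, _, Vt = np.linalg.svd(T - alpha * G, full_matrices=False)
            T_new = U @ Vt             # projection onto orthonormal matrices
            f_new = f(A @ T_new)
            if f_new < fval:
                break
            alpha /= 2
        if fval - f_new < tol:         # no useful progress: stop
            T, fval = T_new, f_new
            break
        T, fval = T_new, f_new
        alpha *= 2                     # be more ambitious next time
    return A @ T, T
```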

87 citations


Journal ArticleDOI
TL;DR: In this article, the authors relate Thurstonian models for paired comparisons data to Thurstonian models for ranking data, which assign zero probabilities to all intransitive patterns.
Abstract: We relate Thurstonian models for paired comparisons data to Thurstonian models for ranking data, which assign zero probabilities to all intransitive patterns. We also propose an intermediate model for paired comparisons data that assigns nonzero probabilities to all transitive patterns and to some but not all intransitive patterns. There is a close correspondence between the multidimensional normal ogive model employed in educational testing and Thurstone's model for paired comparisons data under multiple judgment sampling with minimal identification restrictions. Like the normal ogive model, Thurstonian models have two formulations, a factor analytic and an IRT formulation. We use the factor analytic formulation to estimate this model from the first and second order marginals of the contingency table using estimators proposed by Muthén. We also propose a statistic to assess the fit of these models to the first and second order marginals of the contingency table. This is important, as a model may reproduce the estimated thresholds and tetrachoric correlations well, yet fail to reproduce the marginals of the contingency table if the assumption of multivariate normality is incorrect. A simulation study is performed to investigate the performance of three alternative limited information estimators which differ in the procedure used in their final stage: unweighted least squares (ULS), diagonally weighted least squares (DWLS), and full weighted least squares (WLS). Both the ULS and DWLS estimators show good performance with medium size problems and small samples, with a slightly better performance of the ULS estimator.

85 citations


Journal ArticleDOI
Ivo Ponocny1
TL;DR: In this article, a Monte Carlo algorithm realizing a family of nonparametric tests for the Rasch model is introduced which are conditional on the item and subject marginals, based on random changes of elements of data matrices without changing the marginals; most powerful tests against all alternative hypotheses are given for which a monotone characteristic may be computed from the data matrix.
Abstract: A Monte Carlo algorithm is introduced that realizes a family of nonparametric tests for the Rasch model which are conditional on the item and subject marginals. The algorithm is based on random changes of elements of data matrices without changing the marginals; most powerful tests against all alternative hypotheses are given for which a monotone characteristic may be computed from the data matrix; alternatives may also be composed. Computation times are long, but exact p-values are approximated with a quality of approximation that depends only on calculation time, not on the number of persons. The power and the flexibility of the procedure are demonstrated by means of an empirical example where, among other things, indicators of increased item similarities, the existence of subscales, violations of sufficiency of the raw score, and learning processes were found. Many of the features described are implemented in the program T-Rasch 1.0 by Ponocny and Ponocny-Seliger (1999).
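The core of such an algorithm is the marginal-preserving random change. A minimal sketch, assuming a binary persons-by-items matrix and a user-supplied test statistic; names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def tetrad_swap(X):
    # Pick a random 2x2 submatrix; if it is a "checkerboard" (10/01 or
    # 01/10), flipping it changes the matrix but preserves all row and
    # column sums
    r = rng.choice(X.shape[0], 2, replace=False)
    c = rng.choice(X.shape[1], 2, replace=False)
    sub = X[np.ix_(r, c)]
    if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
        X[np.ix_(r, c)] = 1 - sub

def mc_pvalue(X, stat, n_draws=2000, thin=200):
    # Monte Carlo p-value: proportion of marginal-preserving resamples
    # whose statistic is at least as extreme as the observed one
    obs, Y, hits = stat(X), X.copy(), 0
    for _ in range(n_draws):
        for _ in range(thin):
            tetrad_swap(Y)
        hits += stat(Y) >= obs
    return (hits + 1) / (n_draws + 1)
```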

Journal ArticleDOI
TL;DR: In this paper, the authors developed a general approach to factor analysis that involves observed and latent variables that are assumed to be distributed in the exponential family, giving rise to a number of factor models not considered previously and enabling the study of latent variables in an integrated methodological framework, rather than as a collection of seemingly unrelated special cases.
Abstract: We develop a general approach to factor analysis that involves observed and latent variables that are assumed to be distributed in the exponential family. This gives rise to a number of factor models not considered previously and enables the study of latent variables in an integrated methodological framework, rather than as a collection of seemingly unrelated special cases. The framework accommodates a great variety of different measurement scales and accommodates cases where different latent variables have different distributions. The models are estimated with the method of simulated likelihood, which allows for higher dimensional factor solutions to be estimated than heretofore. The models are illustrated on synthetic data. We investigate their performance when the distribution of the latent variables is mis-specified and when part of the observations are missing. We study the properties of the simulation estimators relative to maximum likelihood estimation with numerical integration. We provide an empirical application to the analysis of attitudes.

Journal ArticleDOI
TL;DR: A Bayesian framework for estimating finite mixtures of the LISREL model is proposed, to augment the observed data of the manifest variables with the latent variables and the allocation variables and to obtain the Bayesian solution.
Abstract: In this paper, we propose a Bayesian framework for estimating finite mixtures of the LISREL model. The basic idea in our analysis is to augment the observed data of the manifest variables with the latent variables and the allocation variables. The Gibbs sampler is implemented to obtain the Bayesian solution. Other associated statistical inferences, such as the direct estimation of the latent variables, establishment of a goodness-of-fit assessment for a posited model, Bayesian classification, residual and outlier analyses, are discussed. The methodology is illustrated with a simulation study and a real example.

Journal ArticleDOI
TL;DR: A framework for viewing local dependency within dichotomous and polytomous items that are clustered by design is provided, and a testing procedure is presented that allows researchers to specifically identify individual item pairs that exhibit local dependency, while controlling for false positive rate.
Abstract: Researchers studying item response models are often interested in examining the effects of local dependency on the validity of the resulting conclusion from statistical inference. This paper focuses on the detection of local dependency. We provide a framework for viewing local dependency within dichotomous and polytomous items that are clustered by design, and present a testing procedure that allows researchers to specifically identify individual item pairs that exhibit local dependency, while controlling for false positive rate. Simulation results from the study indicate that the proposed method is effective. In addition, a discussion of its relation to other existing methods is provided.

Journal ArticleDOI
TL;DR: In this paper, the use of the person response function (PRF) for identifying nonfitting item score patterns was investigated, and it was concluded that the PRF can be used as a diagnostic tool in person-fit research.
Abstract: Item responses that do not fit an item response theory (IRT) model may cause the latent trait value to be inaccurately estimated. In the past two decades several statistics have been proposed that can be used to identify nonfitting item score patterns. These statistics all yield scalar values. Here, the use of the person response function (PRF) for identifying nonfitting item score patterns was investigated. The PRF is a function and can be used for diagnostic purposes. First, the PRF is defined in a class of IRT models that imply an invariant item ordering. Second, a person-fit method proposed by Trabin & Weiss (1983) is reformulated in a nonparametric IRT context assuming invariant item ordering, and statistical theory proposed by Rosenbaum (1987a) is adapted to test locally whether a PRF is nonincreasing. Third, a simulation study was conducted to compare the use of the PRF with the person-fit statistic ZU3. It is concluded that the PRF can be used as a diagnostic tool in person-fit research.

Journal ArticleDOI
TL;DR: Two new methods for improving the measurement precision of a general test factor are proposed and evaluated; results suggest that these testing methods may significantly enhance the prediction of learning and performance in instances where standardized tests are currently used.
Abstract: Two new methods for improving the measurement precision of a general test factor are proposed and evaluated. One new method provides a multidimensional item response theory estimate obtained from conventional administrations of multiple-choice test items that span general and nuisance dimensions. The other method chooses items adaptively to maximize the precision of the general ability score. Both methods display substantial increases in precision over alternative item selection and scoring procedures. Results suggest that the use of these new testing methods may significantly enhance the prediction of learning and performance in instances where standardized tests are currently used.

Journal ArticleDOI
TL;DR: In this paper, three classes of polytomous IRT models are distinguished: the adjacent category models, the cumulative probability models, and the continuation ratio models; the latter class includes logistic models, such as the sequential model (Tutz, 1990), and nonlogistic models, such as the acceleration model (Samejima, 1995) and the nonparametric sequential model (Hemker, 1996).
Abstract: Three classes of polytomous IRT models are distinguished. These classes are the adjacent category models, the cumulative probability models, and the continuation ratio models. So far, the latter class has received relatively little attention. The class of continuation ratio models includes logistic models, such as the sequential model (Tutz, 1990), and nonlogistic models, such as the acceleration model (Samejima, 1995) and the nonparametric sequential model (Hemker, 1996). Four measurement properties are discussed. These are monotone likelihood ratio of the total score, stochastic ordering of the latent trait by the total score, stochastic ordering of the total score by the latent trait, and invariant item ordering. These properties have been investigated previously for the adjacent category models and the cumulative probability models, and for the continuation ratio models this is done here. It is shown that stochastic ordering of the total score by the latent trait is implied by all continuation ratio models, while monotone likelihood ratio of the total score and stochastic ordering on the latent trait by the total score are not implied by any of the continuation ratio models. Only the sequential rating scale model implies the property of invariant item ordering. Also, we present a Venn diagram showing the relationships among all known polytomous IRT models from all three classes.

Journal ArticleDOI
TL;DR: Methodology is described for fitting a fuzzy consensus partition to a set of partitions of the same set of objects and comparisons are made between them and an alternative approach to obtaining a consensus fuzzy partition proposed by Sato and Sato.
Abstract: Methodology is described for fitting a fuzzy consensus partition to a set of partitions of the same set of objects. Three models defining median partitions are described: two of them are obtained from a least-squares fit of a set of membership functions, and the third (proposed by Pittau and Vichi) is acquired from a least-squares fit of a set of joint membership functions. The models are illustrated by application to both a set of hard partitions and a set of fuzzy partitions and comparisons are made between them and an alternative approach to obtaining a consensus fuzzy partition proposed by Sato and Sato; a discussion is given of some interesting differences in the results.
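For the least-squares median-partition idea, the unconstrained case has a closed form worth noting. A sketch, assuming the cluster columns have already been matched across partitions (the hard part, which the paper's models handle); names are ours:

```python
import numpy as np

def ls_consensus(memberships):
    """Least-squares consensus of fuzzy partitions (a sketch).

    memberships: list of (objects x clusters) membership matrices with
    columns already aligned across partitions. The unconstrained
    least-squares solution is the elementwise mean, whose rows still
    sum to one; the models in the paper impose further structure."""
    return np.mean(memberships, axis=0)
```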

Journal ArticleDOI
TL;DR: In this article, a synthesis of Bock's (1972) nominal categories model and Luce's (1959) choice model is presented for mixed-effects analyses of rank-ordered data.
Abstract: This paper presents a synthesis of Bock's (1972) nominal categories model and Luce's (1959) choice model for mixed-effects analyses of rank-ordered data. It is shown that the proposed ranking model is both parsimonious and flexible in accounting for preference heterogeneity as well as fixed and random effects of covariates. Relationships to other approaches, including Takane's (1987) ideal point discriminant model and Croon's (1989) latent-class version of Luce's ranking model, are also discussed. The application focuses on a ranking study of behavioral traits that parents find desirable in children.


Journal ArticleDOI
TL;DR: In this paper, it was shown that the reliability estimate is not the only factor that determines the sequence for applying corrections for range restriction and unreliability, but rather the nature of the range restriction, not the available reliability coefficient.
Abstract: Corrections of correlations for range restriction (i.e., selection) and unreliability are common in psychometric work. The current rule of thumb for determining the order in which to apply these corrections looks to the nature of the reliability estimate (i.e., restricted or unrestricted). While intuitive, this rule of thumb is untenable when the correction includes the variable upon which selection is made, as is generally the case. Using classical test theory, we show that it is the nature of the range restriction, not the nature of the available reliability coefficient, that determines the sequence for applying corrections for range restriction and unreliability.
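The two corrections themselves are classical, and a toy computation shows why the ordering question matters: the two orders generally disagree. A sketch using the standard disattenuation and Thorndike Case II formulas; names and example numbers are ours:

```python
def disattenuate(r, rxx, ryy=1.0):
    # Classical correction for unreliability: r / sqrt(rxx * ryy)
    return r / (rxx * ryy) ** 0.5

def unrestrict(r, u):
    # Thorndike Case II correction for direct range restriction on x;
    # u = (unrestricted SD of x) / (restricted SD of x)
    return r * u / (1 + r**2 * (u**2 - 1)) ** 0.5

r, rxx, u = 0.30, 0.80, 1.5
print(unrestrict(disattenuate(r, rxx), u))   # disattenuate, then unrestrict
print(disattenuate(unrestrict(r, u), rxx))   # unrestrict, then disattenuate
```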

Journal ArticleDOI
TL;DR: This paper describes an interactive procedure for multiobjective asymmetric unidimensional seriation problems, which uses a dynamic-programming algorithm to partially generate the efficient set of sequences for small to medium-sized problems, and a multioperation heuristic to estimate theefficient set for larger problems.
Abstract: Combinatorial optimization problems in the social and behavioral sciences are frequently associated with a variety of alternative objective criteria. Multiobjective programming is an operations research methodology that enables the quantitative analyst to investigate tradeoffs among relevant objective criteria. In this paper, we describe an interactive procedure for multiobjective asymmetric unidimensional seriation problems. This procedure uses a dynamic-programming algorithm to partially generate the efficient set of sequences for small to medium-sized problems, and a multioperation heuristic to estimate the efficient set for larger problems. The interactive multiobjective procedure is applied to an empirical data set from the psychometric literature. We conclude with a discussion of other potential areas of application in combinatorial data analysis.
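A sketch of the single-objective building block: ordering objects by dynamic programming over subsets, here maximizing the sum of above-diagonal entries of an asymmetric matrix. The paper's procedure handles several objectives at once; names are ours:

```python
def seriate(A):
    """Order objects to maximize the sum of above-diagonal entries of the
    asymmetric matrix A, by dynamic programming over subsets.

    A sketch of the single-objective case; the exponential state space
    limits it to roughly 20 objects."""
    n = len(A)
    best = {frozenset(): (0.0, [])}
    for _ in range(n):
        nxt = {}
        for S, (val, order) in best.items():
            for j in range(n):
                if j in S:
                    continue
                # Placing j after all of S adds every A[i][j] with i before j
                gain = val + sum(A[i][j] for i in S)
                T = S | {j}
                if T not in nxt or gain > nxt[T][0]:
                    nxt[T] = (gain, order + [j])
        best = nxt
    return best[frozenset(range(n))]   # (objective value, ordering)
```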

Journal ArticleDOI
TL;DR: In this article, a special rotation procedure is proposed for the exploratory dynamic factor model for stationary multivariate time series, which applies separately to each univariate component series of a q-variate latent factor series.
Abstract: A special rotation procedure is proposed for the exploratory dynamic factor model for stationary multivariate time series. The rotation procedure applies separately to each univariate component series of a q-variate latent factor series and transforms such a component, initially represented as white noise, into a univariate moving-average. This is accomplished by minimizing a so-called state-space criterion that penalizes deviations of the rotated solution from a generalized state-space model with only instantaneous factor loadings. Alternative criteria are discussed in the closing section. The results of an empirical application are presented in some detail.

Journal ArticleDOI
TL;DR: The main result provides theoretical support to the practice of nonparametric item response modeling, by showing that models for long assessments have the property of asymptotic identifiability.
Abstract: The identifiability of item response models with nonparametrically specified item characteristic curves is considered. Strict identifiability is achieved, with a fixed latent trait distribution, when only a single set of item characteristic curves can possibly generate the manifest distribution of the item responses. When item characteristic curves belong to a very general class, this property cannot be achieved. However, for assessments with many items, it is shown that all models for the manifest distribution have item characteristic curves that are very near one another and pointwise differences between them converge to zero at all values of the latent trait as the number of items increases. An upper bound for the rate at which this convergence takes place is given. The main result provides theoretical support to the practice of nonparametric item response modeling, by showing that models for long assessments have the property of asymptotic identifiability.

Journal ArticleDOI
TL;DR: In this article, the asymptotic standard errors of the correlation residuals and Bentler's standardized residuals in covariance structures are derived based on the covariance matrix of raw covariance residuals.
Abstract: The asymptotic standard errors of the correlation residuals and Bentler's standardized residuals in covariance structures are derived based on the asymptotic covariance matrix of raw covariance residuals. Using these results, approximations of the asymptotic standard errors of the root mean square residuals for unstandardized or standardized residuals are derived by the delta method. Further, in mean structures, approximations of the asymptotic standard errors of residuals, standardized residuals and their summary statistics are derived in a similar manner. Simulations are carried out, which show that the asymptotic standard errors of the various types of residuals and the root mean square residuals in covariance, correlation and mean structures are close to actual ones.

Journal ArticleDOI
TL;DR: In this article, the problem of performing all pairwise comparisons among independent groups based on 20% trimmed means is addressed, and three new methods are considered, one of which achieves the desired goal, while maintaining the positive features of the percentile-t bootstrap.
Abstract: The paper takes up the problem of performing all pairwise comparisons among J independent groups based on 20% trimmed means. Currently, a method that stands out is the percentile-t bootstrap method where the bootstrap is used to estimate the quantiles of a Studentized maximum modulus distribution when all pairs of population trimmed means are equal. However, a concern is that in simulations, the actual probability of one or more Type I errors can drop well below the nominal level when sample sizes are small. A practical issue is whether a method can be found that corrects this problem while maintaining the positive features of the percentile-t bootstrap. Three new methods are considered here, one of which achieves the desired goal. Another method, which takes advantage of theoretical results by Singh (1998), performs almost as well but is not recommended when the smallest sample size drops below 15. In some situations, however, it gives substantially shorter confidence intervals.
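For a single group, the percentile-t bootstrap for a trimmed mean is easy to sketch; the paper's methods extend this to all pairwise comparisons via the Studentized maximum modulus. A sketch assuming Yuen's Winsorized-variance standard error; names are ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def trimmed_mean(x, g):
    xs = np.sort(x)
    return xs[g:len(x) - g].mean()

def yuen_se(x, g):
    # Standard error of the trimmed mean from the Winsorized variance
    n = len(x)
    xs = np.sort(x)
    xw = np.clip(xs, xs[g], xs[n - g - 1])   # Winsorize the tails
    h = n - 2 * g                            # effective sample size
    return np.sqrt((n - 1) * xw.var(ddof=1) / (h * (h - 1)))

def percentile_t_ci(x, prop=0.2, B=2000, alpha=0.05):
    # Percentile-t bootstrap CI for one 20% trimmed mean: bootstrap the
    # Studentized statistic, then invert its empirical quantiles
    g = int(prop * len(x))
    tm, se = trimmed_mean(x, g), yuen_se(x, g)
    tstar = []
    for _ in range(B):
        xb = rng.choice(x, size=len(x), replace=True)
        tstar.append((trimmed_mean(xb, g) - tm) / yuen_se(xb, g))
    lo, hi = np.quantile(tstar, [alpha / 2, 1 - alpha / 2])
    return tm - hi * se, tm - lo * se
```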

Journal ArticleDOI
TL;DR: A new reversible jump MCMC method is proposed to approximate the posterior probabilities of the considered models and the evolution of political democracy in 75 developing countries based on eight measures of democracy in two different years is studied.
Abstract: We generalize factor analysis models by allowing the concentration matrix of the residuals to have nonzero off-diagonal elements. The resulting model is named graphical factor analysis model. Allowing a structure of associations gives information about the correlation left unexplained by the unobserved variables, which can be used both in the confirmatory and exploratory context. We first present a sufficient condition for global identifiability of this class of models with a generic number of factors, thereby extending the results in Stanghellini (1997) and Vicard (2000). We then consider the issue of model comparison and show that fast local computations are possible for this purpose, if the conditional independence graphs on the residuals are restricted to be decomposable and a Bayesian approach is adopted. To achieve this aim, we propose a new reversible jump MCMC method to approximate the posterior probabilities of the considered models. We then study the evolution of political democracy in 75 developing countries based on eight measures of democracy in two different years.

Journal ArticleDOI
TL;DR: In this paper, the author argues that quantitative psychology is engineering and art as well as science: an important aspect of quantitative psychology is problem-solving, which is engineering, and an essential aspect of a good solution is beauty, which is art.
Abstract: The Psychometric Society is "devoted to the development of Psychology as a quantitative rational science". Engineering is often set in contradistinction with science; art is sometimes considered different from science. Why, then, juxtapose the words in the title: psychometric, engineering, and art? Because an important aspect of quantitative psychology is problem-solving, and engineering solves problems. And an essential aspect of a good solution is beauty; hence, art. In overview and with examples, this presentation describes activities that are quantitative psychology as engineering and art, that is, as design. Extended illustrations involve systems for scoring tests in realistic contexts. Allusions are made to other examples that extend the conception of quantitative psychology as engineering and art across a wider range of psychometric activities.

Journal ArticleDOI
TL;DR: In this article, sufficient and necessary conditions for global identifiability are presented for a nonlinear logistic test model (NLTM) where the weights are linear functions, while conditions for local identifiability are shown to require a model with fewer restrictions.
Abstract: The linear logistic test model (LLTM) specifies the item parameters as a weighted sum of basic parameters. The LLTM is a special case of a more general nonlinear logistic test model (NLTM) where the weights are partially unknown. This paper is about the identifiability of the NLTM. Sufficient and necessary conditions for global identifiability are presented for a NLTM where the weights are linear functions, while conditions for local identifiability are shown to require a model with fewer restrictions. It is also discussed how these conditions are checked using an algorithm due to Bekker, Merckens, and Wansbeek (1994). Several illustrations are given.
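The LLTM's weight structure is simple to illustrate. A sketch with a hypothetical weight matrix Q; full column rank of Q is the obvious necessary ingredient for recovering the basic parameters:

```python
import numpy as np

# LLTM structure: item difficulties are a known weighted sum of basic
# parameters, beta = Q @ eta.
Q = np.array([[1.0, 0.0],      # hypothetical weight matrix: how often each
              [1.0, 1.0],      # cognitive operation enters each item
              [0.0, 2.0]])
eta = np.array([0.5, -0.3])    # hypothetical basic parameters
beta = Q @ eta                 # implied item difficulties
```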

Journal ArticleDOI
TL;DR: The ordinal hierarchical classes model is shown to subsume Coombs and Kao's model for nonmetric factor analysis and an algorithm is described to fit the model to a given data set and is subsequently evaluated in an extensive simulation study.
Abstract: This paper proposes an ordinal generalization of the hierarchical classes model originally proposed by De Boeck and Rosenberg (1988). Any hierarchical classes model implies a decomposition of a two-way two-mode binary array M into two component matrices, called bundle matrices, which represent the association relation and the set-theoretical relations among the elements of both modes in M. Whereas the original model restricts the bundle matrices to be binary, the ordinal hierarchical classes model assumes that the bundles are ordinal variables with a prespecified number of values. This generalization results in a classification model with classes ordered along ordinal dimensions. The ordinal hierarchical classes model is shown to subsume Coombs and Kao's (1955) model for nonmetric factor analysis. An algorithm is described to fit the model to a given data set and is subsequently evaluated in an extensive simulation study. An application of the model to student housing data is discussed.
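The decomposition in the binary (non-ordinal) case is a Boolean matrix product, which a few lines make concrete. A sketch with hypothetical bundle matrices; in the ordinal generalization the bundle entries take ordered values rather than 0/1:

```python
import numpy as np

# Hierarchical classes association rule: a cell of the binary array M is
# 1 exactly when the row object and the column object share at least one
# bundle, i.e. M is the Boolean product of the two bundle matrices.
A = np.array([[1, 0],          # hypothetical row-bundle memberships
              [1, 1],
              [0, 1]])
B = np.array([[1, 0],          # hypothetical column-bundle memberships
              [0, 1]])
M = (A @ B.T > 0).astype(int)  # Boolean matrix product
```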