
Showing papers in "Psychometrika in 2000"


Journal ArticleDOI
TL;DR: The Latent Moderated Structural Equations (LMS) approach, introduced in this paper, is a new method for the analysis of the general interaction model that utilizes the mixture distribution and provides ML estimation of model parameters by adapting the EM algorithm.
Abstract: In the context of structural equation modeling, a general interaction model with multiple latent interaction effects is introduced. A stochastic analysis represents the nonnormal distribution of the joint indicator vector as a finite mixture of normal distributions. The Latent Moderated Structural Equations (LMS) approach is a new method developed for the analysis of the general interaction model that utilizes the mixture distribution and provides ML estimation of model parameters by adapting the EM algorithm. The finite sample properties and the robustness of LMS are discussed. Finally, the applicability of the new method is illustrated by an empirical example.

1,122 citations


Journal ArticleDOI
TL;DR: In this article, an improved standard error for the Spearman correlation is proposed, the sample size required to yield a confidence interval of the desired width is examined, and a two-stage approximation to the sample-size requirement is shown to give accurate results.
Abstract: Interval estimates of the Pearson, Kendall tau-a and Spearman correlations are reviewed and an improved standard error for the Spearman correlation is proposed. The sample size required to yield a confidence interval having the desired width is examined. A two-stage approximation to the sample size requirement is shown to give accurate results.

661 citations
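
A common reading of such an improved standard error is a Bonett-Wright-style Fisher-transform variance of sqrt((1 + r^2/2)/(n - 3)); the sketch below assumes that form, and the helper names are ours, not the paper's.

```python
import math

def rankdata(x):
    # average ranks, handling ties
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_ci(x, y, z_crit=1.96):
    """Spearman correlation with a Fisher-z confidence interval.
    Assumption: the 'improved' SE is sqrt((1 + r^2/2)/(n - 3)),
    the Bonett-Wright-style adjustment. Requires |r| < 1."""
    n = len(x)
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    r = cov / math.sqrt(vx * vy)
    z = 0.5 * math.log((1 + r) / (1 - r))          # Fisher transform
    se = math.sqrt((1 + r * r / 2) / (n - 3))      # assumed adjusted SE
    back = lambda v: (math.exp(2 * v) - 1) / (math.exp(2 * v) + 1)
    return r, back(z - z_crit * se), back(z + z_crit * se)
```

Because 1 + r^2/2 >= 1, this interval is somewhat wider than the naive Fisher interval with variance 1/(n - 3), reflecting the extra variability of the rank-based statistic.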


Journal ArticleDOI
Michael Eid
TL;DR: In this paper, a new confirmatory factor analysis (CFA) model for multitrait-multimethod (MTMM) data sets is presented, which can be defined by only three assumptions in the framework of classical psychometric test theory.
Abstract: A new model of confirmatory factor analysis (CFA) for multitrait-multimethod (MTMM) data sets is presented. It is shown that this model can be defined by only three assumptions in the framework of classical psychometric test theory (CTT). All other properties of the model, particularly the uncorrelatedness of the trait factors with the method factors, are logical consequences of the definition of the model. In the model proposed there are as many trait factors as different traits considered, but the number of method factors is one fewer than the number of methods included in an MTMM study. The covariance structure implied by this model is derived, and it is shown that this model is identified even under conditions under which other CFA-MTMM models are not. The model is illustrated by two empirical applications. Furthermore, its advantages and limitations are discussed with respect to previously developed CFA models for MTMM data.

310 citations


Journal ArticleDOI
TL;DR: A unified maximum likelihood method for estimating the parameters of the generalized latent trait model will be presented and in addition the scoring of individuals on the latent dimensions is discussed.
Abstract: In this paper we discuss a general model framework within which manifest variables with different distributions in the exponential family can be analyzed with a latent trait model. A unified maximum likelihood method for estimating the parameters of the generalized latent trait model will be presented. We discuss in addition the scoring of individuals on the latent dimensions. The general framework presented allows not only the analysis of manifest variables all of one type, but also the simultaneous analysis of a collection of variables with different distributions. The approach used analyzes the data as they are by making assumptions about the distribution of the manifest variables directly.

246 citations


Journal ArticleDOI
TL;DR: In this article, the exact discrete model (EDM) is employed to link the discrete time model parameters to the underlying continuous time model parameters by means of nonlinear restrictions, and the EDM is generalized to cover not only time-invariant parameters but also the cases of stepwise time-varying (piecewise time-invariant) parameters and parameters varying continuously over time.
Abstract: Maximum likelihood parameter estimation of the continuous time linear stochastic state space model is considered on the basis of large N discrete time data using a structural equation modeling (SEM) program. Random subject effects are allowed to be part of the model. The exact discrete model (EDM) is employed which links the discrete time model parameters to the underlying continuous time model parameters by means of nonlinear restrictions. The EDM is generalized to cover not only time-invariant parameters but also the cases of stepwise time-varying (piecewise time-invariant) parameters and parameters varying continuously over time according to a general polynomial scheme. The identification of the continuous time parameters is discussed and an educational example is presented.

162 citations


Journal ArticleDOI
TL;DR: A hierarchical Bayes approach to modeling parameter heterogeneity in generalized linear models is presented that combines the flexibility of semiparametric, latent class models that assume common parameters within each subpopulation and the parsimony of random effects models that assume normal distributions for the regression parameters.
Abstract: We present a hierarchical Bayes approach to modeling parameter heterogeneity in generalized linear models. The model assumes that there are relevant subpopulations and that within each subpopulation the individual-level regression coefficients have a multivariate normal distribution. However, class membership is not known a priori, so the heterogeneity in the regression coefficients becomes a finite mixture of normal distributions. This approach combines the flexibility of semiparametric, latent class models that assume common parameters within each subpopulation and the parsimony of random effects models that assume normal distributions for the regression parameters. The number of subpopulations is selected to maximize the posterior probability of the model being true. Simulations are presented which document the performance of the methodology for synthetic data with known heterogeneity and number of subpopulations. An application is presented concerning preferences for various aspects of personal computers.

159 citations


Journal ArticleDOI
TL;DR: In this paper, the asymptotic normal distribution of the maximum likelihood estimator of Cronbach's alpha (under normality) is derived for the case when no assumptions are made about the covariances among items.
Abstract: The asymptotic normal distribution of the maximum likelihood estimator of Cronbach's alpha (under normality) is derived for the case when no assumptions are made about the covariances among items. The asymptotic distribution is also considered for the special case of compound symmetry and compared to the exact distribution.

117 citations
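
For reference, the statistic whose sampling distribution is derived above is itself a one-line computation from the item covariance matrix; this sketch (our own helpers, not the paper's code) also shows the compound-symmetry case considered in the article.

```python
def cronbach_alpha(cov):
    """Cronbach's alpha from a k x k item covariance matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of sum score)."""
    k = len(cov)
    total = sum(sum(row) for row in cov)          # variance of the sum score
    item_var = sum(cov[i][i] for i in range(k))   # sum of item variances
    return k / (k - 1) * (1.0 - item_var / total)

# Compound symmetry (equal variances v, equal covariances c):
# alpha reduces to k*c / (v + (k-1)*c), here 3*0.5 / (1 + 2*0.5) = 0.75.
cs = [[1.0, 0.5, 0.5],
      [0.5, 1.0, 0.5],
      [0.5, 0.5, 1.0]]
```

Plugging a sample covariance matrix into the same formula gives the maximum likelihood estimator whose asymptotic normality the paper establishes.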


Journal ArticleDOI
TL;DR: It is illustrated how Markov Chain Monte Carlo procedures such as Gibbs sampling and Metropolis-Hastings methods can be used to perform Bayesian inference, model checking and model comparison without the need for multidimensional numerical integration.
Abstract: Multilevel covariance structure models have become increasingly popular in the psychometric literature in the past few years to account for population heterogeneity and complex study designs. We develop practical simulation based procedures for Bayesian inference of multilevel binary factor analysis models. We illustrate how Markov Chain Monte Carlo procedures such as Gibbs sampling and Metropolis-Hastings methods can be used to perform Bayesian inference, model checking and model comparison without the need for multidimensional numerical integration. We illustrate the proposed estimation methods using three simulation studies and an application involving students' achievement results in different areas of mathematics.

105 citations


Journal ArticleDOI
TL;DR: This article argues that it is very important to choose appropriate variables to be analyzed in multivariate analysis when there are many observed variables, such as those in a questionnaire, and that what is actually done in scale construction with factor analysis is nothing but variable selection.
Abstract: It is very important to choose appropriate variables to be analyzed in multivariate analysis when there are many observed variables such as those in a questionnaire. What is actually done in scale construction with factor analysis is nothing but variable selection.

63 citations


Journal ArticleDOI
TL;DR: The logistic positive exponent family, proposed in this paper, is a family of models which provides asymmetric item characteristic curves and has a consistent principle in ordering the maximum likelihood estimates of ability.
Abstract: The paper addresses and discusses whether the tradition of accepting point-symmetric item characteristic curves is justified by uncovering the inconsistent relationship between the difficulties of items and the order of maximum likelihood estimates of ability. This inconsistency is intrinsic in models that provide point-symmetric item characteristic curves, and in this paper focus is put on the normal ogive model for observation. It is also questioned if in the logistic model the sufficient statistic has forfeited the rationale that is appropriate to the psychological reality. It is observed that the logistic model can be interpreted as the case in which the inconsistency in ordering the maximum likelihood estimates is degenerated. The paper proposes a family of models, called the logistic positive exponent family, which provides asymmetric item characteristic curves. A model in this family has a consistent principle in ordering the maximum likelihood estimates of ability. The family is divided into two subsets each of which has its own principle, and includes the logistic model as a transition from one principle to the other. Rationale and some illustrative examples are given.

59 citations


Journal ArticleDOI
TL;DR: It is shown that the typical rank of a three-way array is I when the array is tall in the sense that JK − J < I < JK, and typical rank results are given for the case where I equals JK − J.
Abstract: The rank of a three-way array refers to the smallest number of rank-one arrays (outer products of three vectors) that generate the array as their sum. It is also the number of components required for a full decomposition of a three-way array by CANDECOMP/PARAFAC. The typical rank of a three-way array refers to the rank a three-way array has almost surely. The present paper deals with typical rank, and generalizes existing results on the typical rank of I × J × K arrays with K = 2 to a particular class of arrays with K ≥ 2. It is shown that the typical rank is I when the array is tall in the sense that JK − J < I < JK. In addition, typical rank results are given for the case where I equals JK − J.
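
The definition of three-way rank as the smallest number of rank-one outer products can be made concrete with a small CP-style construction; the dimensions below are chosen so that JK − J < I < JK, i.e., a "tall" array in the paper's sense (illustrative code of the definition only, not the paper's proofs).

```python
import numpy as np

def cp_array(A, B, C):
    """Assemble a three-way array from CP factor matrices:
    T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r].
    The rank of T is the smallest R admitting such a decomposition."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
I, J, K, R = 5, 3, 2, 4
# Here JK - J = 3 < I = 5 < JK = 6: a "tall" array whose typical rank is I.
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
T = cp_array(A, B, C)
```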

Journal ArticleDOI
TL;DR: In this article, robust schemes in regression are adapted to mean and covariance structure analysis, providing an iteratively reweighted least squares approach to robust structural equation modeling, which reduces to a standard distribution-free methodology if all cases are equally weighted.
Abstract: Robust schemes in regression are adapted to mean and covariance structure analysis, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is properly weighted according to its distance, based on first and second order moments, from the structural model. A simple weighting function is adopted because of its flexibility with changing dimensions. The weight matrix is obtained from an adaptive way of using residuals. Test statistic and standard error estimators are given, based on iteratively reweighted least squares. The method reduces to a standard distribution-free methodology if all cases are equally weighted. Examples demonstrate the value of the robust procedure.

Journal ArticleDOI
TL;DR: In this article, a class of probability models for ranking data, the order-statistics models, is investigated and the robustness of the model is studied by considering a multivariate-t distribution.
Abstract: In this paper, a class of probability models for ranking data, the order-statistics models, is investigated. We extend the usual normal order-statistics model into one where the underlying random variables follow a multivariate normal distribution. A Bayesian approach and the Gibbs sampling technique are used for parameter estimation. In addition, methods to assess the adequacy of model fit are introduced. Robustness of the model is studied by considering a multivariate-t distribution. The proposed method is applied to analyze the presidential election data of the American Psychological Association (APA).
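
The underlying order-statistics mechanism — latent multivariate-normal utilities whose sort order produces the observed ranking — can be sketched as follows (a hypothetical simulator of the data-generating model, not the authors' Gibbs estimation code).

```python
import numpy as np

def sample_rankings(mu, cov, n, seed=0):
    """Order-statistics ranking model: each judge draws latent utilities
    Z ~ N(mu, cov) and ranks the items by decreasing utility."""
    rng = np.random.default_rng(seed)
    Z = rng.multivariate_normal(mu, cov, size=n)
    # each row lists item indices from most to least preferred
    return np.argsort(-Z, axis=1)

# items with well-separated mean utilities are usually ranked 0, 1, 2
rankings = sample_rankings([3.0, 0.0, -3.0], np.eye(3), 500)
```

Replacing the normal draw with a multivariate-t draw gives the robustness variant the abstract mentions.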

Journal ArticleDOI
TL;DR: A paradox is resolved by introducing additional notation to deal with the item selection mechanism in the context of adaptive testing.
Abstract: Item response theory posits “local independence,” or conditional independence of item responses given item parameters and examinee proficiency parameters. The usual definition of local independence, however, addresses the context of fixed tests, and initially appears to yield incorrect response-pattern probabilities in the context of adaptive testing. The paradox is resolved by introducing additional notation to deal with the item selection mechanism.
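
Under local independence the response-pattern probability is a product of per-item conditional probabilities given ability, and — the point of the paper — that product form survives adaptive testing once the item-selection mechanism is conditioned on. A minimal sketch with 2PL items (our illustration, not the paper's notation):

```python
import math

def pattern_prob(theta, items, responses):
    """Response-pattern probability under local independence: the product of
    per-item 2PL conditional probabilities given theta. With an adaptive but
    ability-independent selection rule, this product form still holds once
    the administered items are conditioned on."""
    prob = 1.0
    for (a, b), x in zip(items, responses):
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        prob *= p if x else 1.0 - p
    return prob
```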

Journal ArticleDOI
TL;DR: Minimax designs are proposed for IRT models to overcome the problem of local optimality and are compared to sequentially constructed designs for the two parameter logistic model; the results show that minimax designs can be nearly as efficient as sequentially constructed designs.
Abstract: Various different item response theory (IRT) models can be used in educational and psychological measurement to analyze test data. One of the major drawbacks of these models is that efficient parameter estimation can only be achieved with very large data sets. Therefore, it is often worthwhile to search for designs of the test data that in some way will optimize the parameter estimates. The results from the statistical theory on optimal design can be applied for efficient estimation of the parameters. A major problem in finding an optimal design for IRT models is that the designs are only optimal for a given set of parameters, that is, they are locally optimal. Locally optimal designs can be constructed with a sequential design procedure. In this paper minimax designs are proposed for IRT models to overcome the problem of local optimality. Minimax designs are compared to sequentially constructed designs for the two parameter logistic model and the results show that minimax designs can be nearly as efficient as sequentially constructed designs.
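
Designs for the 2PL model are driven by item information, a^2 P(1 − P), and the minimax idea — choose the design point whose worst-case information over a set of plausible parameters is largest — can be caricatured in a few lines (a toy sketch of the principle, not the paper's algorithm).

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def minimax_point(candidates, param_set):
    """Toy minimax design: among candidate design points, pick the one that
    maximizes the minimum information over plausible (a, b) values."""
    best, best_val = None, -1.0
    for theta in candidates:
        worst = min(item_information(theta, a, b) for a, b in param_set)
        if worst > best_val:
            best, best_val = theta, worst
    return best
```

With two plausible difficulties at b = −1 and b = +1, the compromise point θ = 0 wins, which is the flavor of robustness minimax designs buy over locally optimal ones.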

Journal ArticleDOI
TL;DR: In this paper, a closed form expression is derived for the asymptotic bias of both the g.l.b. and its numerator, under the assumptions that the rank of the reduced covariance matrix is at or above the Ledermann bound, and that the nonnegativity constraints on the diagonal elements of the matrix of unique variances are inactive.
Abstract: In theory, the greatest lower bound (g.l.b.) to reliability is the best possible lower bound to the reliability based on single test administration. Yet the practical use of the g.l.b. has been severely hindered by sampling bias problems. It is well known that the g.l.b. based on small samples (even a sample of one thousand subjects is not generally enough) may severely overestimate the population value, and statistical treatment of the bias has been badly missing. The only results obtained so far are concerned with the asymptotic variance of the g.l.b. and of its numerator (the maximum possible error variance of a test), based on first order derivatives and the assumption of multivariate normality. The present paper extends these results by offering explicit expressions for the second order derivatives. This yields a closed form expression for the asymptotic bias of both the g.l.b. and its numerator, under the assumptions that the rank of the reduced covariance matrix is at or above the Ledermann bound, and that the nonnegativity constraints on the diagonal elements of the matrix of unique variances are inactive. It is also shown that, when the reduced rank is at its highest possible value (i.e., the number of variables minus one), the numerator of the g.l.b. is asymptotically unbiased, and the asymptotic bias of the g.l.b. is negative. The latter results are contrary to common belief, but apply only to cases where the number of variables is small. The asymptotic results are illustrated by numerical examples.

Journal ArticleDOI
TL;DR: In this article, the problem of observed-score equating with a multivariate ability structure underlying the scores has been studied, and possible ways of dealing with the requirement of known ability are discussed, including such methods as conditional observed score equating at point estimates or posterior expected conditional equating.
Abstract: Observed-score equating using the marginal distributions of two tests is not necessarily the universally best approach it has been claimed to be. On the other hand, equating using the conditional distributions given the ability level of the examinee is theoretically ideal. Possible ways of dealing with the requirement of known ability are discussed, including such methods as conditional observed-score equating at point estimates or posterior expected conditional equating. The methods are generalized to the problem of observed-score equating with a multivariate ability structure underlying the scores.
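
For contrast with the conditional approach the paper advocates, the usual marginal observed-score (equipercentile) equating can be sketched on raw score samples (illustrative only; operational equating smooths the score distributions first, and the helper is ours, not the paper's).

```python
import bisect

def equipercentile(scores_x, scores_y, x):
    """Map a score x on form X to form Y by matching percentile ranks.
    Minimal marginal observed-score equating on raw samples."""
    sx, sy = sorted(scores_x), sorted(scores_y)
    p = bisect.bisect_right(sx, x) / len(sx)              # percentile rank of x
    k = min(len(sy) - 1, max(0, int(round(p * len(sy))) - 1))
    return sy[k]
```

The paper's conditional version would apply this matching within each ability level (or at a point estimate or posterior expectation of ability) rather than to the pooled marginal distributions.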

Journal ArticleDOI
TL;DR: Cureton and Mulaik (1975) proposed the Weighted Varimax rotation; in this paper their weighting procedure is applied to Direct Oblimin (Clarkson & Jennrich, 1988), and the resulting method is called Weighted Oblimin.
Abstract: Cureton & Mulaik (1975) proposed the Weighted Varimax rotation so that Varimax (Kaiser, 1958) could reach simple solutions when the complexities of the variables in the solution are larger than one. In the present paper the weighting procedure proposed by Cureton & Mulaik (1975) is applied to Direct Oblimin (Clarkson & Jennrich, 1988), and the rotation method obtained is called Weighted Oblimin. It has been tested on artificial complex data and real data, and the results seem to indicate that, even though Direct Oblimin rotation fails when applied to complex data, Weighted Oblimin gives good results if a variable with complexity one can be found for each factor in the pattern. Although the weighting procedure proposed by Cureton & Mulaik is based on Landahl's (1938) expression for orthogonal factors, Weighted Oblimin seems to be adequate even with highly oblique factors. The new rotation method was compared to other rotation methods based on the same weighting procedure and, whenever a variable with complexity one could be found for each factor in the pattern, Weighted Oblimin gave the best results. When rotating a simple empirical loading matrix, Weighted Oblimin seemed to slightly increase the performance of Direct Oblimin.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a general method that adjusts for the inflation of information associated with a test containing item clusters, and a computational scheme was presented for the evaluation of the factor of adjustment for clusters in the restrictive case of two items per cluster, and the general case of more than two item per cluster.
Abstract: When multiple items are clustered around a reading passage, the local independence assumption in item response theory is often violated. The amount of information contained in an item cluster is usually overestimated if violation of local independence is ignored and items are treated as locally independent when in fact they are not. In this article we provide a general method that adjusts for the inflation of information associated with a test containing item clusters. A computational scheme is presented for the evaluation of the factor of adjustment for clusters in the restrictive case of two items per cluster, and the general case of more than two items per cluster. The methodology was motivated by a study of the NAEP Reading Assessment. We present a simulated study along with an analysis of a NAEP data set.

Journal ArticleDOI
TL;DR: This paper addresses the loss of information in CML estimation by using the information concept of F-information (Liang, 1983), which makes it possible to specify the conditions for no loss of information and to define a quantification of information loss.
Abstract: In item response models of the Rasch type (Fischer & Molenaar, 1995), item parameters are often estimated by the conditional maximum likelihood (CML) method. This paper addresses the loss of information in CML estimation by using the information concept of F-information (Liang, 1983). This concept makes it possible to specify the conditions for no loss of information and to define a quantification of information loss. For the dichotomous Rasch model, the derivations will be given in detail to show the use of the F-information concept for making comparisons for different estimation methods. It is shown that by using CML for item parameter estimation, some information is almost always lost. But compared to JML (joint maximum likelihood) as well as to MML (marginal maximum likelihood) the loss is very small. The reported efficiency in the use of information of CML to JML and to MML in several comparisons is always larger than 93%, and in tests with a length of 20 items or more, larger than 99%.
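
CML estimation in the Rasch model rests on conditional pattern probabilities built from elementary symmetric functions of ε_i = exp(−β_i); the standard summation recursion is short enough to sketch (our helpers, not the paper's code).

```python
import math

def esf(eps):
    """Elementary symmetric functions gamma_0..gamma_k of eps_1..eps_k,
    via the standard summation recursion gamma_new[r] = gamma[r] + e*gamma[r-1]."""
    g = [1.0]
    for e in eps:
        g = [(g[r] if r < len(g) else 0.0) + (e * g[r - 1] if r >= 1 else 0.0)
             for r in range(len(g) + 1)]
    return g

def cml_pattern_prob(pattern, beta):
    """Conditional probability of a 0/1 response pattern given its raw score
    under the Rasch model, with eps_i = exp(-beta_i); the ability parameter
    cancels out, which is what makes CML estimation possible."""
    eps = [math.exp(-b) for b in beta]
    num = 1.0
    for x, e in zip(pattern, eps):
        if x:
            num *= e
    return num / esf(eps)[sum(pattern)]
```

Maximizing the product of such conditional probabilities over β is CML; the paper quantifies how little information this conditioning sacrifices relative to JML and MML.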

Journal ArticleDOI
TL;DR: In this article, the asymptotic correlations between the estimates of factor and component loadings are obtained for the exploratory factor analysis model with the assumption of a multivariate normal distribution for manifest variables.
Abstract: The asymptotic correlations between the estimates of factor and component loadings are obtained for the exploratory factor analysis model with the assumption of a multivariate normal distribution for manifest variables. The asymptotic correlations are derived for the cases of unstandardized and standardized manifest variables with orthogonal and oblique rotations. Based on the above results, the asymptotic standard errors for estimated correlations between factors and components are derived. Further, the asymptotic standard error of the mean squared canonical correlation for factors and components, which is an overall index for the closeness of factors and components, is derived. The results of a Monte Carlo simulation are presented to show the usefulness of the asymptotic results in the data with a finite sample size.

Journal ArticleDOI
TL;DR: In this paper, the notion of equivalence classes is introduced, defining a more meaningful partition of the covariance structures of the Thurstonian ranking models, and the equivalence classes of Case III and Case V are completely characterized.
Abstract: It is well-known that the representations of the Thurstonian Case III and Case V models for paired comparison data are not unique. Similarly, when analyzing ranking data, other equivalent covariance structures can substitute for those given by Thurstone in these cases. That is, we may more broadly define the family of covariance structures satisfying Case III and Case V conditions. This paper introduces the notion of equivalence classes which defines a more meaningful partition of the covariance structures of the Thurstonian ranking models. In addition, the equivalence classes of Case V and Case III are completely characterized.

Journal ArticleDOI
TL;DR: In this paper, it is shown that because the authors of these indices fail to adopt an adequate standard error of the estimator, the statistical properties of the indices are unclear, which can lead to paradoxical conclusions; an adjusted standard error is derived.
Abstract: In the literature on the measurement of change, reliable change is usually determined by means of a confidence interval around an observed value of a statistic that estimates the true change. In recent literature on the efficacy of psychotherapies, attention has been particularly directed at the improvement of the estimation of the true change. Reliable Change Indices, incorporating the reliability-weighted measure of individual change, also known as Kelley's formula, have been proposed. According to current practice, these indices are defined as the ratio of such an estimator and an intuitively appealing criterion and then regarded as standard normally distributed statistics. However, because the authors fail to adopt an adequate standard error of the estimator, the statistical properties of their indices are unclear. In this article, it is shown that this can lead to paradoxical conclusions. The adjusted standard error is derived.
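
For orientation, the classical Jacobson-Truax-style reliable-change index and Kelley's reliability-weighted true-score estimate referred to above can be computed as follows (a sketch of the standard textbook formulas, not the adjusted standard error this paper derives).

```python
import math

def reliable_change_index(x1, x2, sd, rxx):
    """Jacobson-Truax-style reliable change index: observed change divided by
    the standard error of the difference score, treated as ~ N(0, 1)."""
    sem = sd * math.sqrt(1.0 - rxx)        # standard error of measurement
    s_diff = math.sqrt(2.0) * sem          # SE of a difference of two scores
    return (x2 - x1) / s_diff

def kelley_true_score(x, mean, rxx):
    """Kelley's reliability-weighted estimate of the true score:
    shrink the observed score toward the group mean by the reliability."""
    return rxx * x + (1.0 - rxx) * mean
```

The paper's point is that once Kelley's estimator replaces the raw change score, the intuitive denominator above is no longer its standard error, so treating the ratio as standard normal can mislead.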

Journal ArticleDOI
TL;DR: In this article, the effects of rescaling on estimated standard errors of factor loading estimates, and the consequent effect on z-statistics, are studied in three variants of the classical exploratory factor model under canonical, raw varimax, and normal varimax solutions.
Abstract: Current practice in factor analysis typically involves analysis of correlation rather than covariance matrices. We study whether the standard z-statistic that evaluates whether a factor loading is statistically necessary is correctly applied in such situations and more generally when the variables being analyzed are arbitrarily rescaled. Effects of rescaling on estimated standard errors of factor loading estimates, and the consequent effect on z-statistics, are studied in three variants of the classical exploratory factor model under canonical, raw varimax, and normal varimax solutions. For models with analytical solutions we find that some of the standard errors as well as their estimates are scale equivariant, while others are invariant. For a model in which an analytical solution does not exist, we use an example to illustrate that neither the factor loading estimates nor the standard error estimates possess scale equivariance or invariance, implying that different conclusions could be obtained with different scalings. Together with the prior findings on parameter estimates, these results provide new guidance for a key statistical aspect of factor analysis.

Journal ArticleDOI
TL;DR: In this article, the authors investigate under what conditions the matrix of factor loadings from the factor analysis model with equal unique variances approximates that of the regular factor analysis model, and show that the two models give similar matrices of factor loadings if Schneeweiss' condition holds, i.e., the difference between the largest and the smallest unique variance is small relative to the sizes of the column sums of squared factor loadings.
Abstract: We investigate under what conditions the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. We show that the two models will give similar matrices of factor loadings if Schneeweiss' condition, that the difference between the largest and the smallest value of unique variances is small relative to the sizes of the column sums of squared factor loadings, holds. Furthermore, we generalize our results and discuss the conditions under which the matrix of factor loadings from the regular factor analysis model will be well approximated by the matrix of factor loadings from Joreskog's image factor analysis model. In particular, we discuss Guttman's condition (i.e., the number of variables increases without limit) for the two models to agree, in relation with the condition we have shown, and conclude that Schneeweiss' condition is a generalization of Guttman's condition. Some implications for practice are discussed.

Journal ArticleDOI
TL;DR: In this paper, a factor analysis model is used to estimate the full covariance matrix when some variables are not jointly observed, with standard errors of the estimated covariances obtained by the delta method; the method can also be applied to other problems such as regression factor analysis.
Abstract: Situations sometimes arise in which variables collected in a study are not jointly observed. This typically occurs because of study design. An example is an equating study where distinct groups of subjects are administered different sections of a test. In the normal maximum likelihood function to estimate the covariance matrix among all variables, elements corresponding to those that are not jointly observed are unidentified. If a factor analysis model holds for the variables, however, then all sections of the matrix can be accurately estimated, using the fact that the covariances are a function of the factor loadings. Standard errors of the estimated covariances can be obtained by the delta method. In addition to estimating the covariance matrix in this design, the method can be applied to other problems such as regression factor analysis. Two examples are presented to illustrate the method.

Journal ArticleDOI
TL;DR: The Alternating Length-Constrained Non-Negative Least-Squares (ALC-NNLS) algorithm is proposed, which minimizes the nonnegative least-squares loss function over the parameters under a length constraint, by alternatingly minimizing over one parameter while keeping the others fixed.
Abstract: An important feature of distance-based principal components analysis is that the variables can be optimally transformed. For monotone spline transformation, a nonnegative least-squares problem with a length constraint has to be solved in each iteration. As an alternative algorithm to Lawson and Hanson (1974), we propose the Alternating Length-Constrained Non-Negative Least-Squares (ALC-NNLS) algorithm, which minimizes the nonnegative least-squares loss function over the parameters under a length constraint, by alternatingly minimizing over one parameter while keeping the others fixed. Several properties of the new algorithm are discussed. A Monte Carlo study is presented which shows that for most cases in distance-based principal components analysis, ALC-NNLS performs as well as the method of Lawson and Hanson or sometimes even better in terms of the quality of the solution.
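
The alternating, one-parameter-at-a-time idea behind ALC-NNLS can be illustrated with plain coordinate-descent NNLS; the length constraint, which the paper handles explicitly, is omitted here, so this is our simplified sketch rather than the authors' algorithm.

```python
import numpy as np

def cd_nnls(X, y, iters=200):
    """Coordinate-descent nonnegative least squares: minimize ||y - X b||^2
    over b >= 0 by repeatedly minimizing over one coordinate while keeping
    the others fixed (the alternating idea behind ALC-NNLS, without the
    length constraint)."""
    n, p = X.shape
    b = np.zeros(p)
    resid = y.astype(float).copy()
    for _ in range(iters):
        for j in range(p):
            xj = X[:, j]
            # unconstrained optimum for b_j given the rest, clipped at zero
            bj_new = max(0.0, b[j] + xj @ resid / (xj @ xj))
            resid += xj * (b[j] - bj_new)
            b[j] = bj_new
    return b

b = cd_nnls(np.eye(3), np.array([1.0, -2.0, 3.0]))
```

In ALC-NNLS proper, each coordinate update would additionally respect the length constraint on the transformed variable; Lawson and Hanson's active-set method solves the same nonnegative subproblem in one shot.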

Journal ArticleDOI
TL;DR: In this article, RC(M) regression models are proposed to decompose the residual associations between polytomous variables, based on Goodman's RC(M) association model; they facilitate joint estimation of effects due to manifest and omitted (continuous) variables without requiring numerical integration.
Abstract: When modeling the relationship between two nominal categorical variables, it is often desirable to include covariates to understand how individuals differ in their response behavior. Typically, however, not all the relevant covariates are available, with the result that the measured variables cannot fully account for the associations between the nominal variables. Under the assumption that the observed and unobserved variables follow a homogeneous conditional Gaussian distribution, this paper proposes RC(M) regression models to decompose the residual associations between the polytomous variables. Based on Goodman's (1979, 1985) RC(M) association model, a distinctive feature of RC(M) regression models is that they facilitate the joint estimation of effects due to manifest and omitted (continuous) variables without requiring numerical integration. The RC(M) regression models are illustrated using data from the High School and Beyond study (Tatsuoka & Lohnes, 1988).

Journal ArticleDOI
Xu Liqun
TL;DR: In this paper, a multistage ranking model is proposed, which represents a generalization of Luce's model and uses the n×n item-rank relative frequency matrix (p-matrix) as a device for summarizing a set of rankings.
Abstract: In this paper, we propose a (n−1)² parameter, multistage ranking model, which represents a generalization of Luce's model. We propose the n×n item-rank relative frequency matrix (p-matrix) as a device for summarizing a set of rankings. As an alternative to the traditional maximum likelihood estimation, for the proposed model we suggest a method which estimates the parameters from the p-matrix. An illustrative numerical example is given. The proposed model and its differences from Luce's model are briefly discussed. We also show some special p-matrix patterns possessed by the Thurstonian models and distance-based models.

Journal ArticleDOI
Ivo Ponocny
TL;DR: In this article, a new algorithm for obtaining exact person fit indexes for the Rasch model is introduced which realizes most powerful tests for a very general family of alternative hypotheses, including tests concerning DIF as well as model-deviating item correlations.
Abstract: A new algorithm for obtaining exact person fit indexes for the Rasch model is introduced which realizes most powerful tests for a very general family of alternative hypotheses, including tests concerning DIF as well as model-deviating item correlations. The method is also used as a goodness-of-fit test for whole data sets where the item parameters are assumed to be known. For tests with 30 items at most, exact values are obtained, for longer tests a Monte Carlo-algorithm is proposed. Simulated examples and an empirical investigation demonstrate test power and applicability to item elimination.