
Showing papers in "Psychometrika in 1988"


Journal ArticleDOI
TL;DR: It is shown that when the c parameters are unequal, the area between two ICCs is infinite, and the significance of the exact area measures for item bias research is discussed.
Abstract: Formulas for computing the exact signed and unsigned areas between two item characteristic curves (ICCs) are presented. It is further shown that when the c parameters are unequal, the area between two ICCs is infinite. The significance of the exact area measures for item bias research is discussed.

405 citations
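
The equal-c signed-area result can be checked numerically. Below is a minimal Python sketch assuming the usual three-parameter logistic ICC with D = 1.7; the grid integration and item parameters are illustrative stand-ins, not the paper's exact-area formulas.

```python
import numpy as np

def icc(theta, a, b, c, D=1.7):
    """Three-parameter logistic item characteristic curve."""
    return c + (1 - c) / (1 + np.exp(-D * a * (theta - b)))

# Dense grid standing in for the integral over the whole theta axis.
theta = np.linspace(-40.0, 40.0, 200001)
dt = theta[1] - theta[0]

p1 = icc(theta, a=1.0, b=0.0, c=0.2)
p2 = icc(theta, a=1.5, b=1.0, c=0.2)

# Equal c: the signed area is finite and equals (1 - c) * (b2 - b1) = 0.8.
print(np.sum(p1 - p2) * dt)

# Unequal c: |P1 - P2| tends to |c1 - c2| > 0 as theta -> -infinity, so the
# exact unsigned area is infinite; numerically it grows with the grid width.
q2 = icc(theta, a=1.5, b=1.0, c=0.1)
print(np.sum(np.abs(p1 - q2)) * dt)
```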


Journal ArticleDOI
TL;DR: A general model is developed for the analysis of multivariate multilevel data structures; special cases of the model include repeated measures designs, multiple matrix samples, multilevel latent variable models, multiple time series, and variance and covariance component models.
Abstract: A general model is developed for the analysis of multivariate multilevel data structures. Special cases of the model include repeated measures designs, multiple matrix samples, multilevel latent variable models, multiple time series, and variance and covariance component models.

252 citations


Journal ArticleDOI
TL;DR: A discrete, categorical model and a corresponding data-analysis method are presented for two-way two-mode data arrays with 0, 1 entries; the method aims at recovering the underlying structure in a data matrix by minimizing the discrepancies between the data and the recovered structure.
Abstract: A discrete, categorical model and a corresponding data-analysis method are presented for two-way two-mode (objects × attributes) data arrays with 0, 1 entries. The model contains the following two basic components: a set-theoretical formulation of the relations among objects and attributes; a Boolean decomposition of the matrix. The set-theoretical formulation defines a subset of the possible decompositions as consistent with it. A general method for graphically representing the set-theoretical decomposition is described. The data-analysis algorithm, dubbed HICLAS, aims at recovering the underlying structure in a data matrix by minimizing the discrepancies between the data and the recovered structure. HICLAS is evaluated with a simulation study and two empirical applications.

222 citations
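
The Boolean decomposition at the core of the model is easy to sketch. The snippet below shows the OR-of-ANDs reconstruction of a 0/1 matrix from object and attribute bundle matrices, and the cell-wise discrepancy count that HICLAS-type algorithms minimize; the bundle matrices here are hand-picked illustrations, not HICLAS output.

```python
import numpy as np

def boolean_reconstruct(A, B):
    """Boolean product: entry (i, j) is 1 iff object i and attribute j
    share at least one bundle (OR over r of A[i, r] AND B[j, r])."""
    return (A @ B.T > 0).astype(int)

def discrepancy(D, A, B):
    """Number of cells where data and reconstruction disagree: the loss
    minimized by HICLAS-type algorithms."""
    return int(np.sum(D != boolean_reconstruct(A, B)))

D = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1]])
A = np.array([[1, 1], [1, 0], [0, 1]])   # objects x bundles
B = np.array([[1, 0], [1, 1], [0, 1]])   # attributes x bundles
print(boolean_reconstruct(A, B))
print(discrepancy(D, A, B))              # 2 mismatched cells
```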


Journal ArticleDOI
TL;DR: In this paper, a general model for homogeneous, dichotomous items when the answer key is not known a priori is presented, which is useful when a researcher wants to study objectively the knowledge possessed by members of a culturally coherent group that the researcher is not a member of.
Abstract: A general model is presented for homogeneous, dichotomous items when the answer key is not known a priori. The model is structurally related to the two-class latent structure model with the roles of respondents and items interchanged. For very small sets of respondents, iterative maximum likelihood estimates of the parameters can be obtained by existing methods. For other situations, new estimation methods are developed and assessed with Monte Carlo data. The answer key can be accurately reconstructed with relatively small sets of respondents. The model is useful when a researcher wants to study objectively the knowledge possessed by members of a culturally coherent group that the researcher is not a member of.

202 citations
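
As a naive point of reference for the answer-key reconstruction task, one can majority-vote each item; this baseline ignores respondent competence and is not the maximum likelihood estimator the paper develops.

```python
import numpy as np

# Rows: respondents, columns: items; entries are True/False answers (1/0).
responses = np.array([[1, 0, 1, 1],
                      [1, 0, 0, 1],
                      [1, 1, 1, 1],
                      [0, 0, 1, 1]])

# Majority answer per item as a crude answer-key estimate.
key_hat = (responses.mean(axis=0) >= 0.5).astype(int)
print(key_hat)
```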


Journal ArticleDOI
TL;DR: In this article, the fit of the Rasch model is tested by constructing functions of the data, on which model tests can be based that have power against specific model violations, and the asymptotic distribution of these tests are derived by using the theoretical framework of testing model fit in general multinomial and product-multinomial models.
Abstract: The present paper is concerned with testing the fit of the Rasch model. It is shown that this can be achieved by constructing functions of the data, on which model tests can be based that have power against specific model violations. It is shown that the asymptotic distribution of these tests can be derived by using the theoretical framework of testing model fit in general multinomial and product-multinomial models. The model tests are presented in two versions: one that can be used in the context of marginal maximum likelihood estimation and one that can be applied in the context of conditional maximum likelihood estimation.

145 citations
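
The asymptotic framework referred to rests on comparing observed with expected frequencies in (product-)multinomial models. A generic Pearson-type fit statistic of that kind is sketched below; the paper's specific Rasch-based statistics are not reproduced.

```python
import numpy as np
from scipy.stats import chi2

def pearson_fit(observed, expected, n_free_params):
    """Generic Pearson X^2 fit statistic for a multinomial model with
    estimated parameters; df = (#cells - 1) - #free parameters."""
    x2 = float(np.sum((observed - expected) ** 2 / expected))
    df = observed.size - 1 - n_free_params
    return x2, df, chi2.sf(x2, df)

obs = np.array([30.0, 50.0, 20.0])
exp = np.array([25.0, 55.0, 20.0])
print(pearson_fit(obs, exp, n_free_params=0))
```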


Journal ArticleDOI
TL;DR: In this paper, a method for determining a sample size that will achieve a prespecified bound on confidence interval width for the interrater agreement measure, kappa, is presented.
Abstract: This paper gives a method for determining a sample size that will achieve a prespecified bound on confidence interval width for the interrater agreement measure, κ. The same results can be used when a prespecified power is desired for testing hypotheses about the value of kappa. An example from the literature is used to illustrate the methods proposed here.

128 citations
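
The planning logic can be sketched with large-sample algebra. Assuming the approximation Var(κ̂) ≈ Q/n for a planning constant Q (obtained, e.g., from pilot data; the paper's exact variance expression is not reproduced), the required n follows from the target interval width.

```python
import math
from scipy.stats import norm

def n_for_kappa_ci(Q, width, conf=0.95):
    """Smallest n with confidence interval width 2 * z * sqrt(Q / n)
    no greater than `width`, assuming Var(kappa_hat) ~= Q / n."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return math.ceil((2 * z) ** 2 * Q / width ** 2)

# E.g., a pilot-based planning value Q = 0.4 and a target width of 0.2:
print(n_for_kappa_ci(Q=0.4, width=0.2))   # 154
```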


Journal ArticleDOI
TL;DR: In this paper, the authors dealt with two-group classification when a unidimensional latent trait, ϑ, is appropriate for explaining the data, X, and showed that if X has monotone likelihood ratio then optimal allocation rules can be based on its magnitude when allocation must be made to one of two groups related to ϑ.
Abstract: This paper deals with two-group classification when a unidimensional latent trait, ϑ, is appropriate for explaining the data, X. It is shown that if X has monotone likelihood ratio then optimal allocation rules can be based on its magnitude when allocation must be made to one of two groups related to ϑ. These groups may relate to ϑ probabilistically via a non-decreasing function p(ϑ), or may be defined by all subjects above or below a selected value on ϑ. When the data arise from dichotomous items, the assumption that the items have nondecreasing item characteristic functions is enough to ensure that the unweighted sum of responses (the number-right score or raw score) possesses this fundamental monotone likelihood ratio property.

125 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a consistent estimate of the greatest defensible internal consistency coefficient by factoring item covariances, which is based on Guttman's "immediate retest reliability".
Abstract: A coefficient derived from communalities of test parts has been proposed as greatest lower bound to Guttman's “immediate retest reliability.” The communalities have at times been calculated from covariances between itemsets, which tends to underestimate appreciably. When items are experimentally independent, a consistent estimate of the greatest defensible internal-consistency coefficient is obtained by factoring item covariances. In samples of modest size, this analysis capitalizes on chance; an estimate subject to less upward bias is suggested. For estimating alternate-forms reliability, communality-based coefficients are less appropriate than stratified alpha.

108 citations
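
For reference, coefficient alpha and the stratified alpha named in the closing sentence can be computed from item scores as below; this is a standard-formula sketch on simulated data, not the communality-based estimator the paper analyzes.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_persons, n_items) score matrix."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def stratified_alpha(strata):
    """Stratified alpha: 1 - sum_i var(X_i) * (1 - alpha_i) / var(X),
    where X_i is the total of stratum i and X the grand total."""
    totals = [s.sum(axis=1) for s in strata]
    grand = np.sum(totals, axis=0)
    num = sum(t.var(ddof=1) * (1 - cronbach_alpha(s))
              for s, t in zip(strata, totals))
    return 1 - num / grand.var(ddof=1)

rng = np.random.default_rng(0)
trait = rng.normal(size=(300, 1))
strata = [trait + rng.normal(scale=1.0, size=(300, 5)) for _ in range(2)]
print([round(cronbach_alpha(s), 3) for s in strata],
      round(stratified_alpha(strata), 3))
```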


Journal ArticleDOI
TL;DR: In this article, an extension of component analysis to longitudinal or cross-sectional data is presented, where components are derived under the restriction of invariant and/or stationary compositing weights.
Abstract: An extension of component analysis to longitudinal or cross-sectional data is presented. In this method, components are derived under the restriction of invariant and/or stationary compositing weights. Optimal compositing weights are found numerically. The method can be generalized to allow differential weighting of the observed variables in deriving the component solution. Some choices of weightings are discussed. An illustration of the method using real data is presented.

88 citations


Journal ArticleDOI
TL;DR: A new method is proposed for the statistical analysis of dyadic social interaction data measured over time based on loglinear models for the probabilities for various dyad (or actor pair) states and generalizes the statistical methods proposed by Holland and Leinhardt (1981), Fienberg, Meyer, & Wasserman (1985), and Wasserman (1987) for social network data.
Abstract: A new method is proposed for the statistical analysis of dyadic social interaction data measured over time. The data to be studied are assumed to be realizations of a social network of a fixed set of actors interacting on a single relation. The method is based on loglinear models for the probabilities for various dyad (or actor pair) states and generalizes the statistical methods proposed by Holland and Leinhardt (1981), Fienberg, Meyer, & Wasserman (1985), and Wasserman (1987) for social network data. Two statistical models are described: the first is an “associative” approach that allows for the study of how the network has changed over time; the second is a “predictive” approach that permits the researcher to model one time point as a function of previous time points. These approaches are briefly contrasted with earlier methods for the sequential analysis of social networks and are illustrated with an example of longitudinal sociometric data.

85 citations


Journal ArticleDOI
TL;DR: Most powerful tests for inappropriateness are described together with methods for computing their power, and a recursion greatly simplifying the calculation of optimal test statistics is described and illustrated.
Abstract: The test-taking behavior of some examinees may be so idiosyncratic that their test scores may not be comparable to the scores of more typical examinees. Appropriateness measurement attempts to use answer patterns to recognize atypical examinees. In this report, appropriateness measurement procedures are viewed as statistical tests for choosing between a null hypothesis of normal test-taking behavior and an alternative hypothesis of atypical test-taking behavior. Most powerful tests for inappropriateness are described together with methods for computing their power. A recursion greatly simplifying the calculation of optimal test statistics is described and illustrated.
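
The recursion referred to is of the kind that delivers the exact null distribution of a score statistic over independent items. The standard number-correct version is sketched below as an illustration; the paper's optimal statistics involve a more general weighted variant.

```python
import numpy as np

def score_distribution(p):
    """Exact distribution of the number-correct score for independent
    items with success probabilities p, built up one item at a time."""
    f = np.array([1.0])                 # P(score = 0) before any item
    for pj in p:
        g = np.zeros(len(f) + 1)
        g[:-1] += f * (1 - pj)          # item answered incorrectly
        g[1:] += f * pj                 # item answered correctly
        f = g
    return f

dist = score_distribution([0.8, 0.6, 0.6, 0.3])
print(dist, dist.sum())                 # probabilities sum to 1
```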

Journal ArticleDOI
TL;DR: In this paper, the authors apply the notion of optimal scaling to sets of variables by using sums within sets; the resulting technique, called OVERALS, uses transformations that can be multiple or single, with single transformations of three types: nominal, ordinal, and numerical.
Abstract: Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper we apply it to sets of variables by using sums within sets. The resulting technique is called OVERALS. It uses the notion of optimal scaling, with transformations that can be multiple or single. The single transformations consist of three types: nominal, ordinal, and numerical. The corresponding OVERALS computer program minimizes a least squares loss function by using an alternating least squares algorithm. Many existing linear and nonlinear multivariate analysis techniques are shown to be special cases of OVERALS. An application to data from an epidemiological survey is presented.

Journal ArticleDOI
TL;DR: The authors describe circumstances under which collateral information about examinees may be used to make inferences about item parameters more precise, and circumstances under which such information must be used to obtain correct inferences.
Abstract: Standard procedures for estimating item parameters in item response theory (IRT) ignore collateral information that may be available about examinees, such as their standing on demographic and educational variables. This paper describes circumstances under which collateral information about examinees may be used to make inferences about item parameters more precise, and circumstances under which it must be used to obtain correct inferences.

Journal ArticleDOI
TL;DR: In this paper, a method for empirically testing the appropriateness of using tetrachoric correlations for a set of dichotomous variables is proposed, where trivariate marginal information is used to obtain a set of one-degree-of-freedom chi-square tests of the underlying normality.
Abstract: A method is proposed for empirically testing the appropriateness of using tetrachoric correlations for a set of dichotomous variables. Trivariate marginal information is used to obtain a set of one-degree-of-freedom chi-square tests of the underlying normality. It is argued that such tests should preferably precede further modeling of tetrachorics, for example, modeling by factor analysis. The assumptions are tested in some real and simulated data.
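
For context, a tetrachoric correlation matches an observed 2 × 2 table to an underlying bivariate normal. A crude grid-search sketch of that bivariate step is given below; the paper's one-degree-of-freedom tests use trivariate margins and are not reproduced.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def tetrachoric(table):
    """Tetrachoric correlation for a 2x2 table of counts, by matching the
    observed P(X=1, Y=1) cell to an underlying bivariate normal."""
    n = table.sum()
    p_x = table[1, :].sum() / n            # P(X = 1)
    p_y = table[:, 1].sum() / n            # P(Y = 1)
    tau_x, tau_y = norm.ppf(1 - p_x), norm.ppf(1 - p_y)
    target = table[1, 1] / n               # observed P(X=1, Y=1)

    def implied(rho):
        bvn = multivariate_normal([0, 0], [[1, rho], [rho, 1]])
        # P(Z1 > tau_x, Z2 > tau_y) by inclusion-exclusion:
        return 1 - norm.cdf(tau_x) - norm.cdf(tau_y) + bvn.cdf([tau_x, tau_y])

    rhos = np.linspace(-0.99, 0.99, 397)
    return rhos[np.argmin([(implied(r) - target) ** 2 for r in rhos])]

table = np.array([[40, 10],
                  [15, 35]])              # rows: X = 0/1, cols: Y = 0/1
print(tetrachoric(table))
```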

Journal ArticleDOI
TL;DR: Most of the currently used analytic rotation criteria for simple structure in factor analysis are summarized and identified as members of a general symmetric family of quartic criteria.
Abstract: Most of the currently used analytic rotation criteria for simple structure in factor analysis are summarized and identified as members of a general symmetric family of quartic criteria. A unified development of algorithms for orthogonal and direct oblique rotation using arbitrary criteria from this family is given. These algorithms represent fairly straightforward extensions of present methodology, and appear to be the best methods currently available.
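
The most familiar member of the quartic family is the raw varimax criterion. The sketch below evaluates it for a loading matrix, showing that simple structure scores higher than a rotated version of the same loadings; the loadings are illustrative.

```python
import numpy as np

def varimax_criterion(L):
    """Raw varimax: sum over factors of the variance of squared loadings."""
    L2 = L ** 2
    return (L2 ** 2).mean(axis=0).sum() - (L2.mean(axis=0) ** 2).sum()

simple = np.array([[0.8, 0.0],
                   [0.7, 0.0],
                   [0.0, 0.8],
                   [0.0, 0.6]])
t = 0.6
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
# Simple structure scores higher than the same loadings rotated away from it.
print(varimax_criterion(simple), varimax_criterion(simple @ R))
```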

Journal ArticleDOI
TL;DR: In this article, it was shown that the class of nonparametric marginal logistic (NML) models is equivalent to the class where the latent ability assumes at most (m + 2)/2 values.
Abstract: Consider the class of two parameter marginal logistic (Rasch) models, for a test ofm True-False items, where the latent ability is assumed to be bounded. Using results of Karlin and Studen, we show that this class of nonparametric marginal logistic (NML) models is equivalent to the class of marginal logistic models where the latent ability assumes at most (m + 2)/2 values. This equivalence has two implications. First, estimation for the NML model is accomplished by estimating the parameters of a discrete marginal logistic model. Second, consistency for the maximum likelihood estimates of the NML model can be shown (whenm is odd) using the results of Kiefer and Wolfowitz. An example is presented which demonstrates the estimation strategy and contrasts the NML model with a normal marginal logistic model.

Journal ArticleDOI
TL;DR: In this article, an integrated method for rotating and rescaling a set of configurations to optimal agreement in subspaces of varying dimensionalities is developed, which relates existing orthogonal rotation techniques as special cases within a general framework.
Abstract: An integrated method for rotating and rescaling a set of configurations to optimal agreement in subspaces of varying dimensionalities is developed. The approach relates existing orthogonal rotation techniques as special cases within a general framework based on a partition of variation which provides convenient measures of agreement. In addition to the well-known Procrustes and inner product optimality criteria, a criterion which maximizes the “consensus” among subspaces of the configurations is suggested. Since agreement of subspaces of the configurations can be examined and compared, rotation and rescaling is extended from a data transformation technique to an analytical method.
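
The Procrustes criterion named here has the classical SVD solution, sketched below for the full-dimensional orthogonal case; the paper's subspace and rescaling extensions are not reproduced.

```python
import numpy as np

def procrustes_rotation(X, Y):
    """Orthogonal T minimizing ||X T - Y||_F (classical Procrustes):
    T = U V' from the SVD of X'Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
true_T, _ = np.linalg.qr(rng.normal(size=(3, 3)))
Y = X @ true_T
T = procrustes_rotation(X, Y)
print(np.linalg.norm(X @ T - Y))   # ~ 0: the rotation is recovered
```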

Journal ArticleDOI
TL;DR: In this paper, a unified treatment of fully general computational solutions for two of these criteria, Maxbet and Maxdiff, is presented. But it is argued that the Maxdiff solution should be preferred to the Maxbet solution whenever the two criteria coincide.
Abstract: Van de Geer has reviewed various criteria for transforming two or more matrices to maximal agreement, subject to orthogonality constraints. The criteria have applications in the context of matching factor or configuration matrices and in the context of canonical correlation analysis for two or more matrices. The present paper summarizes and gives a unified treatment of fully general computational solutions for two of these criteria, Maxbet and Maxdiff. These solutions will be shown to encompass various well-known methods as special cases. It will be argued that the Maxdiff solution should be preferred to the Maxbet solution whenever the two criteria coincide. Horst's Maxcor method will be shown to lack the property of monotone convergence. Finally, simultaneous and successive versions of the Maxbet and Maxdiff solutions will be treated as special cases of a fully flexible approach where the columns of the rotation matrices are obtained in successive blocks.

Journal ArticleDOI
TL;DR: In this article, it is shown that, given multivariate normality, a condition called multivariate sphericity of the covariance matrix is both necessary and sufficient for the validity of the MMM analysis.
Abstract: Repeated measures on multivariate responses can be analyzed according to either of two models: a doubly multivariate model (DMM) or a multivariate mixed model (MMM). This paper reviews both models and gives three new results concerning the MMM. The first result is, primarily, of theoretical interest; the second and third have implications for practice. First, it is shown that, given multivariate normality, a condition called multivariate sphericity of the covariance matrix is both necessary and sufficient for the validity of the MMM analysis. To test for departure from multivariate sphericity, the likelihood ratio test can be employed. The second result is an approximation to the null distribution of the likelihood ratio test statistic, useful for moderate sample sizes. Third, for situations satisfying multivariate normality, but not multivariate sphericity, a multivariate ε correction factor is derived. The ε correction factor generalizes Box's ε and can be used to construct an adjusted MMM test.
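
For orientation, the univariate Box ε that the multivariate ε generalizes can be computed from the covariance matrix of orthonormal contrasts, as in the sketch below; the Helmert contrasts and the example covariance matrix are illustrative choices.

```python
import numpy as np
from scipy.linalg import helmert

def box_epsilon(S):
    """Box's epsilon for a p x p covariance matrix of repeated measures:
    equals 1 under sphericity, with lower bound 1 / (p - 1)."""
    p = S.shape[0]
    C = helmert(p, full=False)          # (p-1) x p orthonormal contrasts
    lam = np.linalg.eigvalsh(C @ S @ C.T)
    return lam.sum() ** 2 / ((p - 1) * (lam ** 2).sum())

S = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
print(box_epsilon(S))
```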

Journal ArticleDOI
TL;DR: A general rating model as well as a two-parameter model with location and dispersion parameters, analogous to Andrich's Disloc model, are derived, including parameter estimation via the EM algorithm.
Abstract: A general approach for analyzing rating data with latent class models is described, which parallels rating models in the framework of latent trait theory. A general rating model as well as a two-parameter model with location and dispersion parameters, analogous to Andrich's Disloc model, are derived, including parameter estimation via the EM algorithm. Two examples illustrate the application of the models and their statistical control. Model restrictions through equality constraints are discussed and multiparameter generalizations are outlined.

Journal ArticleDOI
TL;DR: In this article, it was shown that for this array the Candecomp/Parafac loss has an infimum of 1, and that the array can be used to challenge the tradition of fitting Indscal and related models by means of the CPM process.
Abstract: Kruskal, Harshman and Lundy have contrived a special 2 × 2 × 2 array to examine formal properties of degenerate Candecomp/Parafac solutions. It is shown that for this array the Candecomp/Parafac loss has an infimum of 1. In addition, the array will be used to challenge the tradition of fitting Indscal and related models by means of the Candecomp/Parafac process.
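
The Candecomp/Parafac loss in question is the squared Frobenius distance between a three-way array and a sum of rank-one terms. A generic evaluation is sketched below with random inputs; this is not the Kruskal-Harshman-Lundy array, whose entries are given in the paper.

```python
import numpy as np

def cp_loss(T, A, B, C):
    """Squared Frobenius distance between T and the CP reconstruction
    sum_r A[:, r] (x) B[:, r] (x) C[:, r] (outer products)."""
    T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
    return float(np.sum((T - T_hat) ** 2))

rng = np.random.default_rng(2)
T = rng.normal(size=(2, 2, 2))
A, B, C = (rng.normal(size=(2, 2)) for _ in range(3))
print(cp_loss(T, A, B, C))
```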

Journal ArticleDOI
TL;DR: In this article, a model and computational procedure based on classical test score theory are presented for determination of a correlation coefficient corrected for attenuation due to unreliability, and variance-covariance expressions for the sample estimates defined earlier are derived, based on application of the delta method.
Abstract: A model and computational procedure based on classical test score theory are presented for determination of a correlation coefficient corrected for attenuation due to unreliability. Next, variance-covariance expressions for the sample estimates defined earlier are derived, based on application of the delta method. Results of a Monte Carlo study are presented in which the adequacy of the derived expressions was assessed for a large number of data forms and potential hypotheses encountered in the behavioral sciences. It is shown that, based on the proposed procedures, confidence intervals for single coefficients are reasonably precise. Two-sample hypothesis tests, for both independent and dependent samples, are also accurate. However, for hypothesis tests involving more than two coefficients (both independent and dependent), the proposed procedures require large ns for adequate precision. Results of a preliminary power analysis reveal no serious loss in efficiency resulting from correction for attenuation. Implications for practice are discussed.
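
The classical correction for attenuation on which the procedure builds is the familiar reliability-weighted ratio; the paper's delta-method variance expressions are not reproduced here.

```python
import numpy as np

def disattenuated_r(r_xy, r_xx, r_yy):
    """Classical correction for attenuation: the true-score correlation
    implied by observed r_xy and the reliabilities r_xx and r_yy."""
    return r_xy / np.sqrt(r_xx * r_yy)

print(disattenuated_r(0.42, 0.80, 0.70))   # ~ 0.561
```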

Journal ArticleDOI
TL;DR: In this paper, a form of correspondence analysis is proposed which decomposes the departure from the quasi-independence model; it is a good alternative to ordinary correspondence analysis in cases where the latter is either impossible or not recommended, for example with missing data or structural zeros.
Abstract: Correspondence analysis can be described as a technique which decomposes the departure from independence in a two-way contingency table. In this paper a form of correspondence analysis is proposed which decomposes the departure from the quasi-independence model. This form seems to be a good alternative to ordinary correspondence analysis in cases where the use of the latter is either impossible or not recommended, for example, in case of missing data or structural zeros. It is shown that Nora's reconstitution of order zero, a procedure well-known in the French literature, is formally identical to our correspondence analysis of incomplete tables. Therefore, reconstitution of order zero can also be interpreted as providing a decomposition of the residuals from the quasi-independence model. Furthermore, correspondence analysis of incomplete tables can be performed using existing programs for ordinary correspondence analysis.
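
Ordinary correspondence analysis decomposes departure from independence via the SVD of standardized residuals, as sketched below. The proposed variant replaces the independence expectations with expected counts under quasi-independence (not reproduced here).

```python
import numpy as np

def ca_inertias(N):
    """Principal inertias of a contingency table: squared singular values
    of the standardized residuals from independence (they sum to X^2 / n;
    the trivial dimension shows up as a ~0 value)."""
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    return np.linalg.svd(S, compute_uv=False) ** 2

N = np.array([[20, 10, 5],
              [10, 20, 10],
              [5, 15, 25]])
print(ca_inertias(N))
```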

Journal ArticleDOI
TL;DR: In this paper, the authors study the class of multivariate distributions in which all bivariate regressions can be linearized by separate transformation of each of the variables, and show that a two-stage procedure which first scales the variables optimally, and then fits a simultaneous equations model, has desirable properties.
Abstract: We study the class of multivariate distributions in which all bivariate regressions can be linearized by separate transformation of each of the variables. This class seems more realistic than the multivariate normal or the elliptical distributions, and at the same time its study allows us to combine the results from multivariate analysis with optimal scaling and classical multivariate analysis. In particular a two-stage procedure which first scales the variables optimally, and then fits a simultaneous equations model, is studied in detail and is shown to have some desirable properties.

Journal ArticleDOI
TL;DR: Goodman's (1979, 1981, 1985) loglinear formulation for bi-way contingency tables is extended to tables with or without missing cells and is used for exploratory purposes.
Abstract: Goodman's (1979, 1981, 1985) loglinear formulation for bi-way contingency tables is extended to tables with or without missing cells and is used for exploratory purposes. A similar formulation is done for three-way tables and generalizations of correspondence analysis are deduced. A generalized version of Goodman's algorithm, based on Newton's elementary unidimensional method is used to estimate the scores in all cases.

Journal ArticleDOI
TL;DR: In this paper, the authors present new results about the properties of the optimal solution to this problem, and also discuss in detail the convergence of an alternating least squares algorithm for ordinal data.
Abstract: Canonical analysis of two convex polyhedral cones consists in looking for two vectors (one in each cone) whose square cosine is a maximum. This paper presents new results about the properties of the optimal solution to this problem, and also discusses in detail the convergence of an alternating least squares algorithm. The set of scalings of an ordinal variable is a convex polyhedral cone, which thus plays an important role in optimal scaling methods for the analysis of ordinal data. Monotone analysis of variance, and correspondence analysis subject to an ordinal constraint on one of the factors are both canonical analyses of a convex polyhedral cone and a subspace. Optimal multiple regression of a dependent ordinal variable on a set of independent ordinal variables is a canonical analysis of two convex polyhedral cones as long as the signs of the regression coefficients are given. We discuss these three situations and illustrate them by examples.
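
Projection onto the cone of monotone (non-decreasing) vectors is isotonic regression, computable by the pool-adjacent-violators algorithm. The sketch below shows that single ingredient, not the paper's full alternating least squares procedure.

```python
def pava(y):
    """Pool-adjacent-violators: least squares projection of y onto the
    cone of non-decreasing vectors (the ordinal-scaling cone)."""
    values, weights = [], []
    for v in y:
        values.append(float(v)); weights.append(1.0)
        # Merge blocks while the monotonicity constraint is violated.
        while len(values) > 1 and values[-2] > values[-1]:
            w = weights[-2] + weights[-1]
            values[-2] = (weights[-2] * values[-2]
                          + weights[-1] * values[-1]) / w
            weights[-2] = w
            values.pop(); weights.pop()
    return [v for v, w in zip(values, weights) for _ in range(int(w))]

print(pava([1.0, 3.0, 2.0, 4.0, 3.5]))   # [1.0, 2.5, 2.5, 3.75, 3.75]
```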


Journal ArticleDOI
TL;DR: In this paper, two measures of multivariate association, based on Wilks' Λ and the Bartlett-Nanda-Pillai trace criterion V, respectively, are compared in terms of properties of the univariate R2 which they generalize.
Abstract: Two kinds of measures of multivariate association, based on Wilks' Λ and the Bartlett-Nanda-Pillai trace criterion V, respectively, are compared in terms of properties of the univariate R2 which they generalize. A unified set of derivations of the properties is provided which are self-contained and not restricted to decompositions in canonical variates. One conclusion is that a symmetric index based on Λ allows generalization of the multiplicative decomposition of R2 in terms of squared partial correlations, but not the additive decomposition in terms of squared semipartial correlations, while the reverse is true for an asymmetric index based on V.
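
The two families of measures start from Wilks' Λ = |E| / |H + E| and the Bartlett-Nanda-Pillai trace V = tr[H(H + E)^-1], computed from hypothesis and error SSCP matrices. A minimal sketch with illustrative matrices:

```python
import numpy as np

def wilks_and_pillai(H, E):
    """Wilks' Lambda and the Bartlett-Nanda-Pillai trace V from
    hypothesis (H) and error (E) SSCP matrices."""
    lam = np.linalg.det(E) / np.linalg.det(H + E)
    V = np.trace(H @ np.linalg.inv(H + E))
    return lam, V

H = np.array([[4.0, 1.0],
              [1.0, 2.0]])
E = np.array([[8.0, 2.0],
              [2.0, 6.0]])
lam, V = wilks_and_pillai(H, E)
print(1.0 - lam, V)   # two R2-style association measures built from them
```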

Journal ArticleDOI
TL;DR: A macro for calculating the Hubert and Arabie (1985) adjusted Rand statistic gives a measure of classification agreement between two partitions of the same set of objects.
Abstract: A macro for calculating the Hubert and Arabie (1985) adjusted Rand statistic is presented. The adjusted Rand statistic gives a measure of classification agreement between two partitions of the same set of objects. The macro is written in the SAS macro language and makes extensive use of SAS/IML software (SAS Institute, 1985a, 1985b). The macro uses two different methods of handling missing values. The default method assumes that each object that has a missing value for the classification category is in its own separate category or cluster for that classification. The optional method places all objects with a missing value for the classification category into the same category for that classification.
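
A Python transcription of the Hubert and Arabie adjusted Rand statistic is sketched below; the published macro is SAS/IML, and this sketch omits the macro's two missing-value options.

```python
from math import comb
from collections import Counter

def adjusted_rand(x, y):
    """Hubert & Arabie (1985) adjusted Rand index between two partitions
    given as label sequences over the same objects."""
    n = len(x)
    nij = Counter(zip(x, y))                  # contingency-table counts
    sum_ij = sum(comb(v, 2) for v in nij.values())
    sum_a = sum(comb(v, 2) for v in Counter(x).values())
    sum_b = sum(comb(v, 2) for v in Counter(y).values())
    expected = sum_a * sum_b / comb(n, 2)     # chance-expected index
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

print(adjusted_rand([0, 0, 1, 1, 2], [0, 0, 1, 2, 2]))   # 0.375
```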

Journal ArticleDOI
TL;DR: The authors' FORTRAN algorithm FACAIC for choosing the number of factors for an orthogonal factor model using Akaike's Information Criterion utilizes the IMSL subroutine OFCOMM.
Abstract: This paper describes the authors' FORTRAN algorithm FACAIC for choosing the number of factors for an orthogonal factor model using Akaike's Information Criterion. FACAIC utilizes the IMSL subroutine OFCOMM.
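
A rough Python analogue of the selection strategy, not the authors' FORTRAN/IMSL code: for an orthogonal k-factor model on p variables the free-parameter count is pk + p - k(k-1)/2 (loadings plus uniquenesses, net of rotational freedom; mean parameters are constant across k and omitted), and the scikit-learn likelihood is this sketch's substitute for the original routines.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def factor_aic(X, k):
    """AIC for a k-factor model: -2 log L + 2 * (#free parameters)."""
    n, p = X.shape
    ll = FactorAnalysis(n_components=k, random_state=0).fit(X).score(X) * n
    q = p * k + p - k * (k - 1) // 2
    return -2 * ll + 2 * q

rng = np.random.default_rng(3)
F = rng.normal(size=(500, 2))
L = rng.normal(size=(2, 6))
X = F @ L + rng.normal(scale=0.5, size=(500, 6))
aics = {k: round(factor_aic(X, k), 1) for k in range(1, 5)}
print(min(aics, key=aics.get), aics)   # AIC should favor k = 2 here
```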