
Showing papers in "Psychometrika in 1978"


Journal ArticleDOI
TL;DR: In this paper, a rating response mechanism for ordered categories, related to the traditional threshold formulation but distinctively different from it, is formulated; in addition to the subject and item parameters, it yields parameters that can be interpreted as thresholds on a latent continuum and discriminations at those thresholds.
Abstract: A rating response mechanism for ordered categories, which is related to the traditional threshold formulation but distinctively different from it, is formulated. In addition to the subject and item parameters two other sets of parameters, which can be interpreted in terms of thresholds on a latent continuum and discriminations at the thresholds, are obtained. These parameters are identified with the category coefficients and the scoring function of the Rasch model for polychotomous responses in which the latent trait is assumed uni-dimensional. In the case where the threshold discriminations are equal, the scoring of successive categories by the familiar assignment of successive integers is justified. In the case where distances between thresholds are also equal, a simple pattern of category coefficients is shown to follow.
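As an illustration of the equal-discrimination case described in the abstract, the sketch below computes category probabilities for the familiar Rasch rating-scale form, in which successive categories are scored with successive integers and the category coefficients are built from thresholds on the latent continuum. The function name and the example thresholds are hypothetical; this is a minimal sketch of the standard model form, not the paper's derivation.

    import numpy as np

    def rating_scale_probs(theta, delta, taus):
        # Category probabilities for a rating item in the Rasch rating-scale form
        # with integer scoring (equal threshold discriminations).
        #   theta : person location, delta : item location,
        #   taus  : thresholds tau_1 .. tau_m on the latent continuum.
        taus = np.asarray(taus, dtype=float)
        # numerator exponents: sum_{k<=x} (theta - delta - tau_k), with 0 for x = 0
        exponents = np.concatenate(([0.0], np.cumsum(theta - delta - taus)))
        probs = np.exp(exponents - exponents.max())   # shift for numerical stability
        return probs / probs.sum()

    # a four-category item with equally spaced (hypothetical) thresholds
    print(rating_scale_probs(theta=0.5, delta=0.0, taus=[-1.0, 0.0, 1.0]))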

2,709 citations


Journal ArticleDOI
TL;DR: In this paper, a general approach to the analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest.
Abstract: A general approach to the analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed. Several different types of covariance structures are considered as special cases of the general model. These include models for sets of congeneric tests, models for confirmatory and exploratory factor analysis, models for estimation of variance and covariance components, regression models with measurement errors, path analysis models, simplex and circumplex models. Many of the different types of covariance structures are illustrated by means of real data.

864 citations


Journal ArticleDOI
TL;DR: In this paper, a method of introducing a controlled degree of skew and kurtosis for Monte Carlo studies was derived; the form of the transformation on normal deviates [X ∼ N(0, 1)] is Y = a + bX + cX^2 + dX^3.
Abstract: A method of introducing a controlled degree of skew and kurtosis for Monte Carlo studies was derived. The form of such a transformation on normal deviates [X ∼ N(0, 1)] is Y = a + bX + cX^2 + dX^3. Analytic and empirical validation of the method is demonstrated.
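A minimal sketch of how such a polynomial transformation can be applied and checked empirically is given below. The coefficients shown are illustrative placeholders, not the tabled values that solve for a particular skew and kurtosis; in practice a, b, c, d are chosen (with a = -c, so the mean stays at zero) to hit the target moments.

    import numpy as np

    def fleishman_transform(x, a, b, c, d):
        # Apply Y = a + b*X + c*X^2 + d*X^3 to standard normal deviates X.
        return a + b * x + c * x**2 + d * x**3

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100_000)
    a, b, c, d = -0.10, 0.95, 0.10, 0.01          # illustrative, not tabled, values
    y = fleishman_transform(x, a, b, c, d)

    # empirical check of the induced skew and excess kurtosis
    z = (y - y.mean()) / y.std()
    print(f"skew ~ {np.mean(z**3):.3f}, excess kurtosis ~ {np.mean(z**4) - 3:.3f}")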

673 citations


Journal ArticleDOI
Bengt Muthén
TL;DR: In this article, a generalized least squares estimator is proposed, which asymptotically is as efficient as the corresponding estimator of Christoffersson, but which demands less computing time.
Abstract: A new method is proposed for the factor analysis of dichotomous variables. Similar to the method of Christoffersson, this uses information from the first and second order proportions to fit a multiple factor model. Through a transformation into a new set of sample characteristics, the estimation is considerably simplified. A generalized least squares estimator is proposed, which asymptotically is as efficient as the corresponding estimator of Christoffersson, but which demands less computing time.

504 citations


Journal ArticleDOI
Otto P. van Driel
TL;DR: In this article, some of the most important causes of improper (boundary) solutions in maximum likelihood factor analysis are discussed and illustrated by means of artificial and empirical data.
Abstract: In the applications of maximum likelihood factor analysis the occurrence of boundary minima instead of proper minima is no exception at all. In the past the causes of such improper solutions could not be detected. This was impossible because the matrices containing the parameters of the factor analysis model were kept positive definite. By dropping these constraints, it becomes possible to distinguish between the different causes of improper solutions. In this paper some of the most important causes are discussed and illustrated by means of artificial and empirical data.

194 citations


Journal ArticleDOI
TL;DR: Monte Carlo methods are used to study the ability of nearest available Mahalanobis metric matching to make the means of matching variables more similar in matched samples than in random samples.
Abstract: Monte Carlo methods are used to study the ability of nearest available Mahalanobis metric matching to make the means of matching variables more similar in matched samples than in random samples.
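A rough sketch of nearest available Mahalanobis metric matching, assuming a greedy pass over the treated units and using the covariance of the combined sample as the metric (a simplification; the covariance matrix used in the original studies may differ):

    import numpy as np

    def nearest_available_match(treated, control):
        # Greedy nearest-available matching: each treated unit, in turn, is paired
        # with the closest not-yet-used control unit in the Mahalanobis metric.
        treated = np.asarray(treated, float)
        control = np.asarray(control, float)
        VI = np.linalg.inv(np.cov(np.vstack([treated, control]), rowvar=False))
        available = list(range(len(control)))
        pairs = []
        for i, t in enumerate(treated):
            diffs = control[available] - t
            d2 = np.einsum("ij,jk,ik->i", diffs, VI, diffs)   # squared distances
            pairs.append((i, available.pop(int(np.argmin(d2)))))
        return pairs

    rng = np.random.default_rng(1)
    treated = rng.normal(0.3, 1.0, size=(20, 2))   # hypothetical matching variables
    control = rng.normal(0.0, 1.0, size=(100, 2))
    print(nearest_available_match(treated, control)[:5])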

188 citations


Journal ArticleDOI
TL;DR: In this article, four approximate tests are considered for repeated measurement designs in which observations are multivariate normal with arbitrary covariance matrices, and traditional within-subject mean square ratios are compared with critical values derived from F distributions with adjusted degrees of freedom.
Abstract: Four approximate tests are considered for repeated measurement designs in which observations are multivariate normal with arbitrary covariance matrices. In these tests traditional within-subject mean square ratios are compared with critical values derived from F distributions with adjusted degrees of freedom. Two of them, the ε approximate and the improved general approximate (IGA) tests, behave adequately in terms of Type I error. Generally, the IGA test functions better than the ε approximate test, although the latter involves fewer computations. With regard to power, the IGA test may compete with one multivariate procedure when the assumptions of the latter are tenable.
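For orientation, the sketch below computes the familiar Greenhouse-Geisser form of the ε correction from the covariance matrix of the repeated measures; the paper's improved general approximate (IGA) test uses a more elaborate adjustment that is not reproduced here, and the data are hypothetical.

    import numpy as np

    def gg_epsilon(S):
        # Greenhouse-Geisser epsilon from the k x k covariance matrix S of the
        # repeated measures; epsilon*(k-1) and epsilon*(k-1)*(n-1) give the
        # adjusted degrees of freedom of the within-subject F ratio.
        k = S.shape[0]
        P = np.eye(k) - np.ones((k, k)) / k        # centering projector
        G = P @ S @ P                              # doubly centered covariance
        return np.trace(G) ** 2 / ((k - 1) * np.trace(G @ G))

    rng = np.random.default_rng(2)
    data = rng.normal(size=(30, 4))                # hypothetical n x k data matrix
    print(round(gg_epsilon(np.cov(data, rowvar=False)), 3))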

188 citations


Journal ArticleDOI
TL;DR: In this paper, a method is presented that extends principal components analysis to variables measured at a variety of scale levels (nominal, ordinal, or interval), whether continuous or discrete.
Abstract: A method is discussed which extends principal components analysis to the situation where the variables may be measured at a variety of scale levels (nominal, ordinal or interval), and where they may be either continuous or discrete. There are no restrictions on the mix of measurement characteristics and there may be any pattern of missing observations. The method scales the observations on each variable within the restrictions imposed by the variable's measurement characteristics, so that the deviation from the principal components model for a specified number of components is minimized in the least squares sense. An alternating least squares algorithm is discussed. An illustrative example is given.

188 citations


Journal ArticleDOI
TL;DR: The U-statistic method is found to be consistently more effective in recovering the original tree structures than either the single-link or complete-link methods.
Abstract: A monotone invariant method of hierarchical clustering based on the Mann-Whitney U-statistic is presented. The effectiveness of the complete-link, single-link, and U-statistic methods in recovering tree structures from error-perturbed data is evaluated. The U-statistic method is found to be consistently more effective in recovering the original tree structures than either the single-link or complete-link methods.

147 citations


Journal ArticleDOI
TL;DR: In this paper, an approximate confidence interval for the parameter is presented as a function of the three mean squares of the analysis of variance table summarizing the results: between subjects, between raters, and error.
Abstract: When the raters participating in a reliability study are a random sample from a larger population of raters, inferences about the intraclass correlation coefficient must be based on the three mean squares from the analysis of variance table summarizing the results: between subjects, between raters, and error. An approximate confidence interval for the parameter is presented as a function of these three mean squares.
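A minimal sketch of the point estimate that these three mean squares yield, in the familiar Shrout-Fleiss ICC(2,1) form for randomly sampled raters; the approximate confidence interval derived in the paper is not reproduced here, and the numerical values are hypothetical.

    def icc_2_1(ms_subjects, ms_raters, ms_error, n_subjects, n_raters):
        # Intraclass correlation for randomly sampled raters (Shrout-Fleiss ICC(2,1)),
        # computed from the three ANOVA mean squares.
        num = ms_subjects - ms_error
        den = (ms_subjects
               + (n_raters - 1) * ms_error
               + n_raters * (ms_raters - ms_error) / n_subjects)
        return num / den

    # hypothetical mean squares from a subjects x raters ANOVA table
    print(round(icc_2_1(ms_subjects=11.24, ms_raters=6.26, ms_error=1.02,
                        n_subjects=20, n_raters=4), 3))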

144 citations



Journal ArticleDOI
Dag Sörbom
TL;DR: In this paper, a general statistical model for simultaneous analysis of data from several groups is described, which is primarily designed to be used for the analysis of covariance, and can handle any number of covariates and criterion variables.
Abstract: A general statistical model for simultaneous analysis of data from several groups is described. The model is primarily designed to be used for the analysis of covariance. The model can handle any number of covariates and criterion variables, and any number of treatment groups. Treatment effects may be assessed when the treatment groups are not randomized. In addition, the model allows for measurement errors in the criterion variables as well as in the covariates. A wide variety of hypotheses concerning the parameters of the model can be tested by means of a large sample likelihood ratio test. In particular, the usual assumptions of ANCOVA may be tested.

Journal ArticleDOI
TL;DR: In this article, several ways of using the traditional analysis of variance to test heterogeneity of spread in factorial designs with equal or unequal n are compared using both theoretical and Monte Carlo results, and two types of spread variables, (1) the jackknife pseudovalues of s^2 and (2) the absolute deviations from the cell median, are shown to be robust and relatively powerful.
Abstract: Several ways of using the traditional analysis of variance to test heterogeneity of spread in factorial designs with equal or unequal n are compared using both theoretical and Monte Carlo results. Two types of spread variables, (1) the jackknife pseudovalues of s^2 and (2) the absolute deviations from the cell median, are shown to be robust and relatively powerful. These variables seem to be generally superior to the Z-variance and Box-Scheffé procedures.
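As a concrete illustration of the second spread variable, the sketch below replaces each score by its absolute deviation from the cell median and then runs an ordinary F test on those values, here for a single factor rather than the factorial designs studied in the paper; the data are hypothetical.

    import numpy as np
    from scipy import stats

    def median_deviation_anova(*groups):
        # Replace each score by its absolute deviation from the cell median,
        # then run an ordinary one-way F test on those spread scores.
        spread = [np.abs(np.asarray(g, float) - np.median(g)) for g in groups]
        return stats.f_oneway(*spread)

    rng = np.random.default_rng(3)
    g1 = rng.normal(0, 1.0, 25)     # hypothetical cells with unequal spread
    g2 = rng.normal(0, 2.0, 25)
    g3 = rng.normal(0, 1.0, 30)
    print(median_deviation_anova(g1, g2, g3))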

Journal ArticleDOI
TL;DR: In this article, a family of models for the representation and assessment of individual differences for multivariate data is embodied in a hierarchically organized and sequentially applied procedure called PINDIS.
Abstract: A family of models for the representation and assessment of individual differences for multivariate data is embodied in a hierarchically organized and sequentially applied procedure called PINDIS. The two principal models used for directly fitting individual configurations to some common or hypothesized space are the dimensional salience and perspective models. By systematically increasing the complexity of transformations one can better determine the validities of the various models and assess the patterns and commonalities of individual differences. PINDIS sheds some new light on the interpretability and general applicability of the dimension weighting approach implemented by the commonly used INDSCAL procedure.

Journal ArticleDOI
TL;DR: In this paper, a general theorem is proven which describes the asymptotic distribution of maximum likelihood estimates subject to identifiability constraints, and a technique is described for displaying Bayesian conditional credibility regions for any sample size.
Abstract: Techniques are developed for surrounding each of the points in a multidimensional scaling solution with a region which will contain the population point with some level of confidence. Bayesian credibility regions are also discussed. A general theorem is proven which describes the asymptotic distribution of maximum likelihood estimates subject to identifiability constraints. This theorem is applied to a number of models to display asymptotic variance-covariance matrices for coordinate estimates under different rotational constraints. A technique is described for displaying Bayesian conditional credibility regions for any sample size.

Journal ArticleDOI
TL;DR: In this paper, the effects of additional variables on factor indeterminacy in a model with a single common factor were discussed and conclusions from them for factor theory in general were drawn.
Abstract: “Determinate” solutions for the indeterminate common factor of p variables satisfying the single common factor model are not unique. Therefore an infinite sequence of additional variables that conform jointly with the original p variables to the original single common factor model does not determine a unique solution for the indeterminate factor of the p variables (although the solution is unique for the factor of the infinite sequence). Other infinite sequences may be found to determine different solutions for the factor of the original p variables. The paper discusses a number of theorems about the effects of additional variables on factor indeterminacy in a model with a single common factor and draws conclusions from them for factor theory in general.

Journal ArticleDOI
TL;DR: In this article, two designs for comparing a judge's ratings with a known standard are presented and compared and the probability distribution of the total number of correct choices is developed in each case.
Abstract: Two designs for comparing a judge's ratings with a known standard are presented and compared. Design A pertains to the situation where the judge is asked to categorize each of N subjects into one of r (known) classes with no knowledge of the actual number in each class. Design B is employed when the judge is given the actual number in each class and is asked to categorize the individuals subject to these constraints. The probability distribution of the total number of correct choices is developed in each case. A power comparison of the two procedures is undertaken.

Journal ArticleDOI
TL;DR: In this article, a gradient method is used to obtain least squares estimates of parameters of them-dimensional euclidean model simultaneously in N spaces, given the observation of all pairwise distances ofn stimuli for each space.
Abstract: A gradient method is used to obtain least squares estimates of parameters of them-dimensional euclidean model simultaneously inN spaces, given the observation of all pairwise distances ofn stimuli for each space. The procedure can estimate an additive constant as well as stimulus projections and the metric of the reference axes of the configuration in each space. Each parameter in the model can be fixed to equal some a priori value, constrained to be equal to any other parameter, or free to take on any value in the parameter space. Two applications of the procedure are described.

Journal ArticleDOI
TL;DR: In this article, Guttman's λ2 and Cronbach's coefficient alpha are shown to be terms of an infinite series of lower bounds, and all terms of this series are equal to the reliability if and only if the test is composed of items which are essentially tau-equivalent.
Abstract: Two well-known lower bounds to the reliability in classical test theory, Guttman's λ2 and Cronbach's coefficient alpha, are shown to be terms of an infinite series of lower bounds. All terms of this series are equal to the reliability if and only if the test is composed of items which are essentially tau-equivalent. Some practical examples, comparing the first 7 terms of the series, are offered. It appears that the second term (λ2) is generally worthwhile computing as an improvement of the first term (alpha), whereas going beyond the second term is not worth the computational effort. Possibly an exception should be made for very short tests having widely spread absolute values of covariances between items. The relationship of the series and previous work on lower bound estimates for the reliability is briefly discussed.
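A minimal sketch of the first two terms of such a series, computed with the standard formulas for coefficient alpha and Guttman's λ2 from an item covariance matrix; the higher-order terms discussed in the paper are not computed. With the equal covariances in the hypothetical matrix below (essentially tau-equivalent items), the two terms coincide, as the abstract states.

    import numpy as np

    def alpha_and_lambda2(C):
        # Coefficient alpha and Guttman's lambda-2 from an item covariance matrix C.
        C = np.asarray(C, float)
        k = C.shape[0]
        total = C.sum()                                    # variance of the total score
        off_diag = total - np.trace(C)                     # sum of item covariances
        alpha = (k / (k - 1)) * (1 - np.trace(C) / total)
        off_sq = (C ** 2).sum() - (np.diag(C) ** 2).sum()  # sum of squared covariances
        lambda2 = (off_diag + np.sqrt(k / (k - 1) * off_sq)) / total
        return alpha, lambda2

    # hypothetical covariance matrix of four essentially tau-equivalent items
    C = [[1.0, 0.5, 0.5, 0.5],
         [0.5, 1.2, 0.5, 0.5],
         [0.5, 0.5, 0.9, 0.5],
         [0.5, 0.5, 0.5, 1.1]]
    print(alpha_and_lambda2(C))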

Journal ArticleDOI
TL;DR: In this paper, a permutation distribution and an associated significance test are developed for the specific hypothesis of "no conformity" reinterpreted as a random matching of the rows and columns of one sociometric matrix to the columns of a second.
Abstract: The problem of comparing two sociometric matrices, as originally discussed by Katz and Powell in the early 1950's, is reconsidered and generalized using a different inference model. In particular, the proposed indices of conformity are justified by a regression argument similar to the one used by Somers in presenting his well-known measures of asymmetric ordinal association. A permutation distribution and an associated significance test are developed for the specific hypothesis of “no conformity” reinterpreted as a random matching of the rows and (simultaneously) the columns of one sociometric matrix to the rows and columns of a second. The approximate significance tests that are presented and illustrated with a simple numerical example are based on the first two moments of the permutation distribution, or alternatively, on a random sample from the complete distribution.
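A rough Monte Carlo version of the permutation idea, assuming a simple sum-of-products statistic rather than the regression-based conformity indices of the paper: the rows and columns of one matrix are relabelled by a single common random permutation, which mimics the "no conformity" null of a random matching. All names and data are hypothetical.

    import numpy as np

    def permutation_pvalue(A, B, n_perm=5000, seed=4):
        # Relabel the rows and columns of B by one common random permutation and
        # count how often the permuted agreement reaches the observed agreement.
        rng = np.random.default_rng(seed)
        A, B = np.asarray(A, float), np.asarray(B, float)
        observed = (A * B).sum()
        count = 0
        for _ in range(n_perm):
            p = rng.permutation(B.shape[0])
            if (A * B[np.ix_(p, p)]).sum() >= observed:
                count += 1
        return (count + 1) / (n_perm + 1)

    rng = np.random.default_rng(5)
    A = (rng.random((12, 12)) < 0.3).astype(int)    # hypothetical choice matrix
    np.fill_diagonal(A, 0)
    B = A.copy()
    B[rng.random(B.shape) < 0.1] ^= 1               # B: a noisy copy of A
    np.fill_diagonal(B, 0)
    print(permutation_pvalue(A, B))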

Journal ArticleDOI
TL;DR: This paper describes a computational method for weighted euclidean distance scaling which combines aspects of an “analytic” solution with an approach using loss functions, and gives essentially the same solutions as INDSCAL for two moderate-size data sets tested.
Abstract: This paper describes a computational method for weighted euclidean distance scaling which combines aspects of an “analytic” solution with an approach using loss functions. We justify this new method by giving a simplified treatment of the algebraic properties of a transformed version of the weighted distance model. The new algorithm is much faster than INDSCAL yet less arbitrary than other “analytic” procedures. The procedure, which we call SUMSCAL (subjective metric scaling), gives essentially the same solutions as INDSCAL for two moderate-size data sets tested.

Journal ArticleDOI
TL;DR: In this paper, a special case of Bloxom's version of Tucker's three-mode model is developed statistically, and a distinction is made between modes in terms of whether they are fixed or random.
Abstract: A special case of Bloxom's version of Tucker's three-mode model is developed statistically. A distinction is made between modes in terms of whether they are fixed or random. Parameter matrices are associated with the fixed modes, while no parameters are associated with the mode representing random observation vectors. The identification problem is discussed, and unknown parameters of the model are estimated by a weighted least squares method based upon a Gauss-Newton algorithm. A goodness-of-fit statistic is presented. An example based upon self-report and peer-report measures of personality shows that the model is applicable to real data. The model represents a generalization of Thurstonian factor analysis; weighted least squares estimators and maximum likelihood estimators of the factor model can be obtained using the proposed theory.

Journal ArticleDOI
TL;DR: In this article, Monte Carlo methods are used to study the efficacy of multivariate matched sampling and regression adjustment for controlling bias due to specific matching variables when dependent variables are moderately nonlinear in those variables.
Abstract: Monte Carlo methods are used to study the efficacy of multivariate matched sampling and regression adjustment for controlling bias due to specific matching variables when dependent variables are moderately nonlinear in those variables. The general conclusion is that nearest available Mahalanobis metric matching in combination with regression adjustment on matched pair differences is a highly effective plan for controlling bias due to the matching variables.

Journal ArticleDOI
TL;DR: In this article, necessary and sufficient conditions are given for a factor analysis model to exist with essentially unique, and hence determinate, common factor scores, with parallel results for models with nonunique and hence indeterminate scores; it is also proved that two models cannot exist with essentially unique but different scores for the same common factors.
Abstract: A factor analysis model consists of a random sequence of variates defined on a probability space and satisfying the usual descriptive equations of the common-factor analysis in which the common-factor scores are dimensionally independent. Necessary and sufficient conditions are given for a model to exist with essentially unique and hence determinate common factor scores. Parallel results are given for the existence of models with nonunique and hence indeterminate scores. It is then proved that two models cannot exist with essentially unique but different scores for the same common factors. The meaning and application of these results are discussed.

Journal ArticleDOI
TL;DR: In this article, a new coordinate estimation routine was proposed for ALSCAL, and an oversight in the interval measurement level case has been found and corrected; and a new initial configuration routine is superior to the original.
Abstract: It is reported that (1) a new coordinate estimation routine is superior to that originally proposed for ALSCAL; (2) an oversight in the interval measurement level case has been found and corrected; and (3) a new initial configuration routine is superior to the original.

Journal ArticleDOI
TL;DR: In this paper, a method is given for constructing reflections to preserve specified rows and columns, and sufficient conditions are stated for the existence of 2^k orthogonally equivalent matrices when the appropriate k(k − 1)/2 elements have been specified.
Abstract: Under mild assumptions, when appropriate elements of a factor loading matrix are specified to be zero, all orthogonally equivalent matrices differ at most by column sign changes. Here a variety of results are given for the more complex case when the specified values are not necessarily zero. A method is given for constructing reflections to preserve specified rows and columns. When the appropriate k(k − 1)/2 elements have been specified, sufficient conditions are stated for the existence of 2^k orthogonally equivalent matrices.

Journal ArticleDOI
TL;DR: In this paper, a formulation of the optimal scaling approach is presented, which is different from Guttman's, and it provides identical scale values for stimulus scale values, and scores for the subjects, which indicate the degrees of response consistency (transitivity) relative to the optimum solution.
Abstract: A formulation, which is different from Guttman's is presented. The two formulations are both called the optimal scaling approach, and are proven to provide identical scale values. The proposed formulation has at least two advantages over Guttman's. Namely, (i) the former serves to clarify close relations of the optimal scaling approach to those of Slater and the vector model of preferential choice, and (ii) in addition to the stimulus scale values, it provides scores for the subjects, which indicate the degrees of response consistency (transitivity), relative to the optimum solution. The method is assumption-free and capable of multidimensional analysis.

Journal ArticleDOI
TL;DR: In this article, it is shown that scale invariance of the loss function is neither a necessary nor a sufficient condition for scale freeness in the orthogonal factor model.
Abstract: The notion of scale freeness does not seem to have been well understood in the factor analytic literature. It has been believed that if the loss function that is minimized to obtain estimates of the parameters in the factor model is scale invariant, then the estimates are scale free. It is shown that scale invariance of the loss function is neither a necessary nor a sufficient condition for scale freeness. A theorem that ensures scale freeness in the orthogonal factor model is given in this paper.

Journal ArticleDOI
TL;DR: In this article, the research literature on short-term instruction (STI) and intermediate-term instruction (ITI) for the SAT-Mathematics and SAT-Verbal is reviewed within a score components model that also considers examinee, item, and instructional characteristics.
Abstract: The research literature on short-term instruction (STI) and intermediate-term instruction (ITI) for the SAT-Mathematics and SAT-Verbal was reviewed. Selected studies of STI and ITI for tests other than the SAT-M and SAT-V, and of testwiseness (TW), per se, were included in the survey if they were judged relevant to the question of special instruction for the SAT. The research studies were reviewed and interpreted within the framework of a score components model which posited four content-related and two TW score components, as well as two secondary ones (test-taking confidence and efficiency), that are theoretically subject to STI and ITI effects. In addition, examinee, item, and instructional characteristics were considered, as these relate to the score components model. Basic discrepancies between negative and positive findings were noted for both the SAT-M and the SAT-V. These were generally resolved in favor of recognizing meaningful STI effects for the SAT-M, but remain unresolved for the SAT-V. Recommendations were made for SAT-M and SAT-V research allowing STI effects to be partitioned according to examinee, item, and instructional characteristics, as these apply to selected test score components.

Journal ArticleDOI
TL;DR: In this article, a simple method is developed for displaying the information that the data set contains about the correlational structure of the new tests, even though each subject takes only one new test.
Abstract: Suppose a collection of standard tests is given to all subjects in a random sample, but a different new test is given to each group of subjects in nonoverlapping subsamples. A simple method is developed for displaying the information that the data set contains about the correlational structure of the new tests. This is possible to some extent, even though each subject takes only one new test. The method uses plausible values of the partial correlations among the new tests given the standard tests in order to generate plausible simple correlations among the new tests and plausible multiple correlations between composites of the new tests and the standard tests. The real data example included suggests that the method can be useful in practical problems.
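A minimal sketch of the key step, assuming standardized scores: a plausible value of the partial correlation between two new tests given the standard tests is converted into an implied simple correlation. Function and variable names, and the numerical inputs, are hypothetical illustrations, not the paper's example data.

    import numpy as np

    def implied_simple_correlation(r_xS, r_yS, R_SS, partial_xy):
        # Simple correlation between new tests x and y implied by an assumed
        # partial correlation given the standard tests S (standardized scores).
        r_xS, r_yS = np.asarray(r_xS, float), np.asarray(r_yS, float)
        R_inv = np.linalg.inv(np.asarray(R_SS, float))
        shared = r_xS @ R_inv @ r_yS            # part transmitted through S
        res_x = 1 - r_xS @ R_inv @ r_xS         # residual variance of x given S
        res_y = 1 - r_yS @ R_inv @ r_yS
        return shared + partial_xy * np.sqrt(res_x * res_y)

    # hypothetical inputs: two standard tests and one plausible partial correlation
    R_SS = [[1.0, 0.6], [0.6, 1.0]]
    print(round(implied_simple_correlation([0.5, 0.4], [0.45, 0.55], R_SS, 0.2), 3))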