scispace - formally typeset
Author

Louis Guttman

Bio: Louis Guttman is an academic researcher from the Hebrew University of Jerusalem. The author has contributed to research in topics: Population & Facet (geometry). The author has an h-index of 42 and has co-authored 81 publications receiving 9,638 citations. Previous affiliations of Louis Guttman include the Society of American Military Engineers & Cornell University.


Papers
Journal ArticleDOI
TL;DR: In this article, it is shown that in many cases the Spearman-Thurstone type of multiple common-factor structure cannot hold for the infinite universe of content from which the sample of observed variables is selected.
Abstract: Let R be any correlation matrix of order n, with unity as each main diagonal element. Common-factor analysis, in the Spearman-Thurstone sense, seeks a diagonal matrix U² such that G = R − U² is Gramian and of minimum rank r. Let s₁ be the number of latent roots of R which are greater than or equal to unity. Then it is proved here that r ≥ s₁. Two further lower bounds to r are also established that are better than s₁. Simple computing procedures are shown for all three lower bounds that avoid any calculation of latent roots. It is proved further that there are many cases where the rank of all diagonal-free submatrices in R is small, but the minimum rank r for a Gramian G is nevertheless very large compared with n. Heuristic criteria are given for testing the hypothesis that a finite r exists for the infinite universe of content from which the sample of n observed variables is selected; in many cases, the Spearman-Thurstone type of multiple common-factor structure cannot hold.
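The first and weakest of the three bounds can be computed directly: s₁ is the number of latent roots of R that are at least unity, a rule later popularized as the Kaiser-Guttman criterion for retaining factors. A minimal sketch in Python; the function name and example matrix are illustrative, not from the paper:

```python
import numpy as np

def guttman_lower_bound(R, tol=1e-10):
    """First Guttman lower bound to the minimum common-factor rank r:
    the number of latent roots (eigenvalues) of R that are >= 1.
    R is assumed to be a correlation matrix with unit diagonal."""
    eigvals = np.linalg.eigvalsh(R)  # eigenvalues in ascending order
    return int(np.sum(eigvals >= 1.0 - tol))

# Example: three variables sharing one strong common factor.
R = np.array([
    [1.0, 0.6, 0.5],
    [0.6, 1.0, 0.4],
    [0.5, 0.4, 1.0],
])
print(guttman_lower_bound(R))  # at least this many common factors are needed
```

Note that the paper also gives two sharper bounds and procedures that avoid eigenvalue calculations entirely; this sketch shows only the s₁ bound.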

1,492 citations

Journal ArticleDOI
TL;DR: In this article, a general coefficient of monotonicity is defined whose maximization is equivalent to optimal satisfaction of the monotonicity condition, and which allows various options both for the treatment of ties and for the weighting of error of fit.
Abstract: Let A_1, A_2, ..., A_n be any n objects, such as variables, categories, people, social groups, ideas, physical objects, or any other. The empirical data to be analyzed are coefficients of similarity or distance within pairs (A_i, A_j), such as correlation coefficients, conditional probabilities or likelihoods, psychological choice or confusion, etc. It is desired to represent these data parsimoniously in a coordinate space, by calculating m coordinates {x_ia} for each A_i, for a semi-metric d of preassigned form d_ij = d(|x_i1 − x_j1|, |x_i2 − x_j2|, ..., |x_im − x_jm|). The dimensionality m is sought to be as small as possible, yet satisfy the monotonicity condition that d_ij < d_kl whenever the data indicate that A_i and A_j are more similar than are A_k and A_l.
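A coefficient of monotonicity of the kind described can be sketched as follows. This implements the weak monotonicity coefficient often denoted μ₂ in Guttman's later work; the specific treatment of ties here (tied pairs contribute zero to both sums) is one of the options the paper alludes to, chosen for illustration:

```python
import numpy as np

def mu2(x, y):
    """Weak coefficient of monotonicity (mu2): equals +1 when y is
    weakly monotone increasing in x, -1 when weakly decreasing, and
    lies strictly between otherwise. Tied pairs contribute zero."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx = x[:, None] - x[None, :]   # all pairwise differences in x
    dy = y[:, None] - y[None, :]   # all pairwise differences in y
    num = np.sum(dx * dy)
    den = np.sum(np.abs(dx) * np.abs(dy))
    return num / den

x = [1, 2, 3, 4, 5]
print(mu2(x, [1, 4, 9, 16, 25]))  # monotone but nonlinear: coefficient is 1.0
```

Unlike the product-moment correlation, the coefficient does not penalize nonlinearity, only departures from monotonicity.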

1,310 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a new approach to quantifying qualitative data in the social and psychological sciences, which seems to afford an adequate basis for such quantification and has been used successfully for the past year or so in investigating morale and other problems in the United States Army by the Research Branch of the Morale Services Division of the Army Service Forces.
Abstract: IN A GREAT deal of research in the social and psychological sciences, interest lies in certain large classes of qualitative observations. For example, research in marriage is concerned with a class of qualitative behavior called marital adjustment, which includes an indefinitely large number of interactions between husband and wife. Public opinion research is concerned with large classes of behavior like expressions of opinion by Americans about the fighting ability of the British. Educational psychology deals with large classes of behavior like achievement tests. It is often desired in such areas to be able to summarize data by saying, for example, that one marital couple is better adjusted than another marital couple, or that one person has a better opinion of the British than has another person, or that one student has a greater knowledge of arithmetic than has another student. There has been considerable discussion concerning the utility of such orderings of persons. It is not our intention in this paper to review such discussions, but instead to present a rather new approach to the problem which seems to afford an adequate basis for quantifying qualitative data. This approach has been used successfully for the past year or so in investigating morale and other problems in the United States Army by the Research Branch of the Morale Services Division of the Army Service Forces. While this approach to quantification leads to some interesting mathematics, no knowledge of this mathematics is required in actually analyzing data. Simple routines have been established which require no knowledge of statistics and which take less time than the various manipulations now in use.
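The approach described here became known as Guttman scaling (scalogram analysis). A minimal sketch of its summary statistic, the coefficient of reproducibility, assuming binary items and the simple convention of counting every cell that deviates from each respondent's ideal cumulative pattern (one of several error-counting conventions in the literature):

```python
import numpy as np

def reproducibility(X):
    """Coefficient of reproducibility for a Guttman scale.
    X: binary matrix, rows = respondents, columns = items.
    Items are reordered from most to least endorsed; a respondent with
    total score k ideally endorses exactly the k most popular items.
    Errors are cells deviating from that ideal cumulative pattern."""
    X = np.asarray(X)
    order = np.argsort(-X.sum(axis=0), kind="stable")  # most popular first
    X = X[:, order]
    scores = X.sum(axis=1)
    n_items = X.shape[1]
    ideal = (np.arange(n_items)[None, :] < scores[:, None]).astype(int)
    errors = np.sum(X != ideal)
    return 1.0 - errors / X.size

# A perfect cumulative pattern reproduces exactly:
perfect = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]])
print(reproducibility(perfect))  # 1.0
```

A reproducibility near 1 indicates that respondents' full response patterns are almost fully recoverable from their total scores alone, which is the sense in which the ordering of persons is justified.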

1,216 citations

Journal ArticleDOI
Louis Guttman1
TL;DR: Lower bounds to the reliability coefficient that can be computed from but a single trial are developed, to avoid the experimental difficulties of making two independent trials.
Abstract: Three sources of variation in experimental results for a test are distinguished: trials, persons, and items. Unreliability is defined only in terms of variation over trials. This definition leads to a more complete analysis than does the conventional one; Spearman's contention is verified that the conventional approach—which was formulated by Yule—introduces unnecessary hypotheses. It is emphasized that at least two trials are necessary to estimate the reliability coefficient. This paper is devoted largely to developing lower bounds to the reliability coefficient that can be computed from but a single trial; these avoid the experimental difficulties of making two independent trials. Six different lower bounds are established, appropriate for different situations. Some of the bounds are easier to compute than are conventional formulas, and all the bounds assume less than do conventional formulas. The terminology used is that of psychological and sociological testing, but the discussion actually provides a general analysis of the reliability of the sum of n variables.
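One of the six single-trial bounds, λ₃, later became widely known as Cronbach's α. A minimal sketch of that bound, assuming a persons-by-items score matrix; the function name is illustrative:

```python
import numpy as np

def lambda3(X):
    """Guttman's lambda-3 lower bound to reliability (identical to
    Cronbach's alpha), computable from a single trial.
    X: score matrix, rows = persons, columns = items."""
    X = np.asarray(X, dtype=float)
    n = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)       # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)   # variance of total scores
    return (n / (n - 1)) * (1.0 - item_vars.sum() / total_var)

# Three parallel items (identical columns) reach the maximum bound:
X = np.array([[1., 1., 1.], [2., 2., 2.], [3., 3., 3.]])
print(lambda3(X))  # 1.0 (up to rounding)
```

Because λ₃ is a lower bound, a low value does not by itself prove a test unreliable; the paper's other bounds (λ₁ through λ₆) are sharper in different situations.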

927 citations


Cited by
Journal ArticleDOI
TL;DR: In this paper, a general formula (α), of which a special case is the Kuder-Richardson coefficient of equivalence, is shown to be the mean of all split-half coefficients resulting from different splittings of a test, and is therefore an estimate of the correlation between two random samples of items from a universe of items like those in the test.
Abstract: A general formula (α) of which a special case is the Kuder-Richardson coefficient of equivalence is shown to be the mean of all split-half coefficients resulting from different splittings of a test. α is therefore an estimate of the correlation between two random samples of items from a universe of items like those in the test. α is found to be an appropriate index of equivalence and, except for very short tests, of the first-factor concentration in the test. Tests divisible into distinct subtests should be so divided before using the formula. The index r̄_ij, derived from α, is shown to be an index of inter-item homogeneity. Comparison is made to the Guttman and Loevinger approaches. Parallel split coefficients are shown to be unnecessary for tests of common types. In designing tests, maximum interpretability of scores is obtained by increasing the first-factor concentration in any separately-scored subtest and avoiding substantial group-factor clusters within a subtest. Scalability is not a requisite.
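The central identity — that α equals the mean of all split-half coefficients in the Flanagan-Rulon form — can be checked numerically. A sketch assuming an even number of items and equal-sized halves; the data are synthetic, generated with a shared person factor so the items correlate:

```python
import numpy as np
from itertools import combinations

def alpha(X):
    """Coefficient alpha for a persons-by-items score matrix."""
    n = X.shape[1]
    return (n / (n - 1)) * (1 - X.var(axis=0, ddof=1).sum()
                            / X.sum(axis=1).var(ddof=1))

def mean_split_half(X):
    """Mean of the Flanagan-Rulon split-half coefficients over all
    splits of the items into two equal halves."""
    n = X.shape[1]
    total_var = X.sum(axis=1).var(ddof=1)
    coefs = []
    for half in combinations(range(n), n // 2):
        a = X[:, list(half)].sum(axis=1)
        b = X[:, [j for j in range(n) if j not in half]].sum(axis=1)
        coefs.append(2 * (1 - (a.var(ddof=1) + b.var(ddof=1)) / total_var))
    return float(np.mean(coefs))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4)) + rng.normal(size=(50, 1))  # shared factor
print(np.isclose(alpha(X), mean_split_half(X)))  # True
```

The equality is an algebraic identity in the sample covariances, not an approximation, so it holds for any data set with an even number of items.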

37,235 citations

Journal ArticleDOI
TL;DR: In this article, a general null model based on modified independence among variables is proposed to provide an additional reference point for the statistical and scientific evaluation of covariance structure models; the importance of supplementing statistical evaluation with incremental fit indices associated with the comparison of hierarchical models is also emphasized.
Abstract: Factor analysis, path analysis, structural equation modeling, and related multivariate statistical methods are based on maximum likelihood or generalized least squares estimation developed for covariance structure models. Large-sample theory provides a chi-square goodness-of-fit test for comparing a model against a general alternative model based on correlated variables. This model comparison is insufficient for model evaluation: In large samples virtually any model tends to be rejected as inadequate, and in small samples various competing models, if evaluated, might be equally acceptable. A general null model based on modified independence among variables is proposed to provide an additional reference point for the statistical and scientific evaluation of covariance structure models. Use of the null model in the context of a procedure that sequentially evaluates the statistical necessity of various sets of parameters places statistical methods in covariance structure analysis into a more complete framework. The concepts of ideal models and pseudo chi-square tests are introduced, and their roles in hypothesis testing are developed. The importance of supplementing statistical evaluation with incremental fit indices associated with the comparison of hierarchical models is also emphasized. Normed and nonnormed fit indices are developed and illustrated.
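The normed and nonnormed fit indices introduced here reduce, under their usual definitions, to simple ratios of chi-square statistics for the null and target models. A minimal sketch; the function names and example values are illustrative:

```python
def normed_fit_index(chi2_null, chi2_model):
    """Bentler-Bonett normed fit index (NFI): the proportional reduction
    in chi-square when moving from the null model to the target model.
    Lies in [0, 1] by construction."""
    return (chi2_null - chi2_model) / chi2_null

def nonnormed_fit_index(chi2_null, df_null, chi2_model, df_model):
    """Nonnormed fit index (NNFI, also the Tucker-Lewis index), which
    compares chi-square/df ratios and so rewards parsimony; it can
    fall outside [0, 1]."""
    null_ratio = chi2_null / df_null
    model_ratio = chi2_model / df_model
    return (null_ratio - model_ratio) / (null_ratio - 1.0)

# Hypothetical chi-square values for a null and a fitted model:
print(normed_fit_index(900.0, 90.0))             # 0.9
print(nonnormed_fit_index(900.0, 45, 90.0, 40))  # (20 - 2.25) / 19, about 0.934
```

Both indices measure fit relative to the proposed null model rather than in absolute terms, which is exactly the additional reference point the abstract argues for.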

16,420 citations

Journal ArticleDOI
TL;DR: Two scales, first standardized on the authors' own population, are presented; one of them, the Instrumental Activities of Daily Living Scale, taps a level of functioning heretofore inadequately represented in attempts to assess everyday functional competence, and both fit into a schema of competence described by the authors.
Abstract: THE use of formal devices for assessing function is becoming standard in agencies serving the elderly. In the Gerontological Society's recent contract study on functional assessment (Howell, 1968), a large assortment of rating scales, checklists, and other techniques in use in applied settings was easily assembled. The present state of the trade seems to be one in which each investigator or practitioner feels an inner compulsion to make his own scale and to cry that other existent scales cannot possibly fit his own setting. The authors join this company in presenting two scales first standardized on their own population (Lawton, 1969). They take some comfort, however, in the fact that one scale, the Physical Self-Maintenance Scale (PSMS), is largely a scale developed and used by other investigators (Lowenthal, 1964), which was adapted for use in our own institution. The second of the scales, the Instrumental Activities of Daily Living Scale (IADL), taps a level of functioning heretofore inadequately represented in attempts to assess everyday functional competence. Both of the scales have been tested further for their usefulness in a variety of types of institutions and other facilities serving community-resident older people. Before describing in detail the behavior measured by these two scales, we shall briefly describe the schema of competence into which these behaviors fit (Lawton, 1969). Human behavior is viewed as varying in the degree of complexity required for functioning in a variety of tasks. The lowest level is called life maintenance, followed by the successively more complex levels of func-

14,832 citations

Journal ArticleDOI
TL;DR: The scree test for the number of factors was first proposed in this 1966 paper and has since been used extensively in behavioral research.
Abstract: (1966). The Scree Test For The Number Of Factors. Multivariate Behavioral Research: Vol. 1, No. 2, pp. 245-276.
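The scree test itself is a visual judgment on the plot of ordered eigenvalues, looking for the "elbow" where the curve levels off into rubble. A minimal sketch of the quantities being plotted, assuming a correlation matrix as input; the example matrix is illustrative:

```python
import numpy as np

def scree_eigenvalues(R):
    """Eigenvalues of a correlation matrix in descending order; the
    scree test inspects this sequence for the elbow separating the
    major factors from the trailing 'scree'."""
    return np.sort(np.linalg.eigvalsh(R))[::-1]

# Three correlated variables plus one nearly independent variable:
R = np.array([
    [1.0, 0.7, 0.6, 0.1],
    [0.7, 1.0, 0.5, 0.2],
    [0.6, 0.5, 1.0, 0.1],
    [0.1, 0.2, 0.1, 1.0],
])
vals = scree_eigenvalues(R)
print(vals)  # a steep drop after the first root suggests one major factor
```

Unlike the Kaiser-Guttman eigenvalue-greater-than-one rule, the scree criterion is based on the shape of the whole eigenvalue sequence rather than a fixed cutoff.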

12,228 citations