Topic
Spearman–Brown prediction formula
About: The Spearman–Brown prediction formula is a research topic. Over its lifetime, 108 publications have been published within this topic, receiving 36,818 citations.
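The topic's namesake formula predicts the reliability of a test whose length is changed by a factor n, given its current reliability ρ. A minimal sketch in Python (the function name and sample values are illustrative):

```python
def spearman_brown(rho: float, n: float) -> float:
    """Spearman-Brown prediction: reliability of a test whose length is
    changed by a factor n, given its current reliability rho."""
    return n * rho / (1 + (n - 1) * rho)

# Doubling (n = 2) a test with reliability 0.6 predicts 0.75:
print(spearman_brown(0.6, 2))  # ≈ 0.75
```

With n = 1 the formula returns the current reliability unchanged; values of n below 1 predict the reliability of a shortened test.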
Papers
TL;DR: A general formula (α), of which the Kuder-Richardson coefficient of equivalence is a special case, is shown to be the mean of all split-half coefficients resulting from different splittings of a test, and therefore an estimate of the correlation between two random samples of items from a universe of items like those in the test.
Abstract: A general formula (α), of which a special case is the Kuder-Richardson coefficient of equivalence, is shown to be the mean of all split-half coefficients resulting from different splittings of a test. α is therefore an estimate of the correlation between two random samples of items from a universe of items like those in the test. α is found to be an appropriate index of equivalence and, except for very short tests, of the first-factor concentration in the test. Tests divisible into distinct subtests should be so divided before using the formula. The index $\bar r_{ij}$, derived from α, is shown to be an index of inter-item homogeneity. Comparison is made to the Guttman and Loevinger approaches. Parallel-split coefficients are shown to be unnecessary for tests of common types. In designing tests, maximum interpretability of scores is obtained by increasing the first-factor concentration in any separately scored subtest and avoiding substantial group-factor clusters within a subtest. Scalability is not a requisite.
37,235 citations
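Coefficient α as described above can be computed directly from raw item scores as α = k/(k−1) · (1 − Σσᵢ²/σ_total²). A minimal sketch, assuming complete data and population variances (the function name and toy data are illustrative):

```python
import statistics

def cronbach_alpha(items):
    """Coefficient alpha from per-item score lists
    (one inner list per item, one position per respondent)."""
    k = len(items)
    total_scores = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(statistics.pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / statistics.pvariance(total_scores))

# Three perfectly parallel items yield alpha = 1:
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(cronbach_alpha(items))  # ≈ 1.0
```

Less consistent item responses pull the total-score variance down relative to the summed item variances, lowering α accordingly.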
TL;DR: The most frequently reported reliability statistic for multiple-item scales is Cronbach's coefficient alpha, but there is some disagreement about the most appropriate indicator of scale reliability when a measure is composed of only two items.
Abstract: Rob Eisinga, Manfred te Grotenhuis, Ben Pelzer, Department of Social Science Research Methods and Department of Sociology, Radboud University Nijmegen, PO Box 9104, 6500 HE Nijmegen, The Netherlands. October 8, 2012. To obtain reliable measures, researchers prefer multiple-item questionnaires to single-item tests. Multiple-item questionnaires, however, may be costly and time-consuming for participants to complete. Researchers therefore frequently administer two-item measures, the reliability of which is commonly assessed by computing a reliability coefficient. There is some disagreement, however, about the most appropriate indicator of scale reliability when a measure is composed of two items. The most frequently reported reliability statistic for multiple-item scales is Cronbach's coefficient alpha, and many researchers report this coefficient for their two-item measure.
1,584 citations
TL;DR: In this article, the authors propose a population definition of coefficient κ and examine its interpretation as a measure of diagnostic reliability in characterizing an individual, along with the effect of reliability, as measured by κ, on estimation bias, precision, and test power.
Abstract: Coefficient κ is generally defined in terms of procedures of computation rather than in terms of a population. Here a population definition is proposed. On this basis, the interpretation of κ as a measure of diagnostic reliability in characterizing an individual, and the effect of reliability, as measured by κ, on estimation bias, precision, and test power are examined. Factors influencing the magnitude of κ are identified. Strategies to improve reliability are proposed, including that of combining multiple unreliable diagnoses.
289 citations
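Coefficient κ contrasts the raters' observed agreement with the agreement expected by chance from their marginal label frequencies: κ = (p_o − p_e)/(1 − p_e). A minimal computational sketch (the function name and labels are illustrative):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels of the same cases."""
    n = len(a)
    categories = set(a) | set(b)
    # Observed proportion of agreement:
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies:
    p_exp = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# 3/4 observed agreement, 1/2 expected by chance -> kappa = 0.5:
print(cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.5
```

κ is 1 for perfect agreement, 0 when agreement is no better than chance, and negative when it is worse.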
TL;DR: In this article, the authors extend the usual approach to the assessment of test or rater reliability to situations that have previously not been appropriate for the application of this standard (Spearman-Brown) approach.
Abstract: The authors extend the usual approach to the assessment of test or rater reliability to situations that have previously not been appropriate for the application of this standard (Spearman-Brown) approach. Specifically, the authors (a) provide an accurate overall estimate of the reliability of a test (or a panel of raters) comprising 2 or more different kinds of items (or raters), a quite common situation, and (b) provide a simple procedure for constructing the overall instrument when it comprises 2 or more kinds of items, judges, or raters, each with its own costs and its own reliabilities.
126 citations
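The standard Spearman–Brown approach that the authors extend treats a panel of k interchangeable raters like a lengthened test: the reliability of their mean rating is kr/(1 + (k−1)r), and the formula can be inverted to find how many raters a target reliability requires. A sketch of that standard single-kind case only, not of the authors' extension (names and values are illustrative):

```python
import math

def panel_reliability(r, k):
    """Reliability of the mean of k interchangeable raters,
    each with single-rater reliability r (Spearman-Brown step-up)."""
    return k * r / (1 + (k - 1) * r)

def raters_needed(r, target):
    """Smallest panel size whose mean rating reaches the target reliability.
    The small tolerance guards against floating-point round-up in ceil."""
    return math.ceil(target * (1 - r) / (r * (1 - target)) - 1e-9)

# With single-rater reliability 0.5, four raters reach 0.8:
print(raters_needed(0.5, 0.8))    # 4
print(panel_reliability(0.5, 4))  # 0.8
```

The situation the paper addresses, a panel mixing raters of different kinds with different reliabilities and costs, is exactly where this single-kind formula stops applying.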
TL;DR: This paper aims to increase insight into reliability studies by pointing to the assumptions underlying reliability coefficients, the similarities between various coefficients, and new applications of reliability coefficients.
120 citations