Topic

Intra-rater reliability

About: Intra-rater reliability is a research topic. Over its lifetime, 2,073 publications have been published within this topic, receiving 140,968 citations.


Papers
Book Chapter DOI
28 Feb 2008

12 citations

Journal Article DOI
TL;DR: This study used experimental software to compare 66 student-produced tooth wax-ups at one U.S. dental school to a digitally scanned ideal standard; a tolerance level of 450 μm yielded 96% agreement of grades, compared with only 53% agreement among faculty.
Abstract: Traditionally, evaluating student work in preclinical courses has relied on the judgment of experienced clinicians utilizing visual inspection. However, research has shown significant disagreement between different evaluators (interrater reliability) and between results from the same evaluator at different times (intrarater reliability). This study evaluated new experimental software (E4D Compare) to compare 66 student-produced tooth wax-ups at one U.S. dental school to an ideal standard after both had been digitally scanned. Using 3D surface-mapping technology, a numerical evaluation was generated by calculating the surface area of the student’s work that was within a set range of the ideal. The aims of the study were to compare the reliability of faculty and software grades and to determine the ideal tolerance value for the software. The investigators hypothesized that the software would provide more consistent feedback than visual grading and that a tolerance value could be determined that closely correlated with the faculty grade. The results showed that a tolerance level of 450 μm provided 96% agreement of grades compared with only 53% agreement for faculty. The results suggest that this software could be used by faculty members as a mechanism to evaluate student work and for students to use as a self-assessment tool.

12 citations
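The tolerance-based scoring described in the entry above can be sketched as a simple pass-rate over per-point deviations between the scanned wax-up and the ideal. The function name, data values, and the 450 μm default are illustrative, not the actual E4D Compare implementation:

```python
def fraction_within_tolerance(deviations_um, tol_um=450.0):
    """Fraction of scanned surface points whose deviation from the
    ideal surface falls within +/- tol_um micrometres.

    Hypothetical sketch: real surface-mapping software works on dense
    3D meshes, not a flat list of deviations.
    """
    if not deviations_um:
        raise ValueError("need at least one measurement")
    hits = sum(1 for d in deviations_um if abs(d) <= tol_um)
    return hits / len(deviations_um)

# Illustrative per-point deviations (micrometres) for one student wax-up
deviations = [120.0, -300.0, 410.0, -520.0]
print(fraction_within_tolerance(deviations))  # 3 of 4 points within 450 um -> 0.75
```

Raising the tolerance makes grades more lenient but also more stable, which is one way to read the study's finding that 450 μm maximized agreement.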

Journal Article DOI
TL;DR: Tests demonstrated good reliability and measurement precision, although ICCs (intraclass correlation coefficients) and SEMs (standard errors of measurement) differed between limbs; the tests were correlated, but only one-third of the variance was shared between them.

12 citations
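The two statistics named in this entry, the intraclass correlation coefficient (ICC) and the standard error of measurement (SEM), can be illustrated with a minimal sketch. This assumes the common ICC(2,1) two-way random-effects, absolute-agreement form and SEM = SD × √(1 − ICC); the ratings matrix is made up:

```python
import math

def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` is a list of rows, one row per subject, one column per rater."""
    n, k = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n * k)
    row_means = [sum(r) / k for r in scores]
    col_means = [sum(r[j] for r in scores) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for r in scores for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_r = ss_rows / (n - 1)                              # between-subject mean square
    ms_c = ss_cols / (k - 1)                              # between-rater mean square
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

def sem(scores, icc):
    """Standard error of measurement: sample SD of all scores * sqrt(1 - ICC)."""
    vals = [x for r in scores for x in r]
    mean = sum(vals) / len(vals)
    sd = math.sqrt(sum((x - mean) ** 2 for x in vals) / (len(vals) - 1))
    return sd * math.sqrt(1 - icc)

ratings = [[8, 8], [5, 5], [3, 3], [9, 9]]  # two raters in perfect agreement
icc = icc2_1(ratings)
print(round(icc, 3), round(sem(ratings, icc), 3))  # 1.0 0.0
```

With perfect agreement the ICC is 1 and the SEM is 0; in real limb-to-limb comparisons, as the TL;DR notes, both values can differ between sides even when each is individually acceptable.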

Journal Article DOI
TL;DR: The purpose of this study was to determine the interrater and intrarater reliability of two experienced physical therapists in the identification of the type of end feel for elbow flexion and extension when using the Paris classification system of normal end feel.
Abstract: Identification of the anatomic or pathoanatomic structure that limits the range of motion of a joint helps determine the need for and type of treatment approach. The purpose of this study was to determine the interrater and intrarater reliability of two experienced physical therapists in identifying the type of end feel for elbow flexion and extension when using the Paris classification system of normal end feel. Four trials of flexion end feel and four trials of extension end feel were conducted for each of the twenty subjects by each blindfolded examiner, for a total of 160 movements per examiner. The intertester Kappa value for interrater reliability was .40 for flexion and .73 for extension, with a significance of p < .0001 for both. Intrarater agreement was measured by percent comparison: Examiner A demonstrated 80% agreement for flexion and 79% for extension, while Examiner B showed 75% agreement for flexion and 78% for extension.

12 citations
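The two agreement statistics reported in this abstract, Cohen's kappa and raw percent agreement, can be computed with a short sketch. Kappa corrects percent agreement for the agreement expected by chance; the end-feel categories and trial data below are illustrative, not taken from the study:

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of trials where the two raters assigned the same category."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(a)
    po = percent_agreement(a, b)                 # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum((ca[c] / n) * (cb[c] / n)           # chance agreement from
             for c in set(a) | set(b))           # each rater's marginals
    return (po - pe) / (1 - pe)

# Illustrative end-feel classifications over four elbow-flexion trials
rater_a = ["capsular", "capsular", "bony", "bony"]
rater_b = ["capsular", "bony", "bony", "bony"]
print(percent_agreement(rater_a, rater_b))       # 0.75
print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.5
```

This is why the study can report a high percent agreement (75-80%) alongside a modest kappa (.40 for flexion): when a few categories dominate, much of the raw agreement is attributable to chance.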


Network Information
Related Topics (5)

Topic                        Papers   Citations   Related
Rehabilitation               46.2K    776.3K      69%
Ankle                        30.4K    687.4K      68%
Systematic review            33.3K    1.6M        68%
Activities of daily living   18.2K    592.8K      68%
Validity                     13.8K    776K        67%
Performance Metrics
No. of papers in the topic in previous years

Year   Papers
2023   42
2022   78
2021   86
2020   83
2019   86
2018   67