scispace - formally typeset

Intra-rater reliability

About: Intra-rater reliability is a research topic. Over its lifetime, 2,073 publications have been published within this topic, receiving 140,968 citations.


Papers
Journal ArticleDOI
TL;DR: This study demonstrates the criterion validity and reliability of remote musculoskeletal assessments of the ankle joint complex using telerehabilitation.
Abstract: Background and Purpose. Musculoskeletal injuries are the most common source of chronic pain and disability. The ankle joint is the most common site of these injuries, and without adequate rehabilitation, function can be severely impaired. Access to physiotherapy rehabilitation services can be limited due to geographical remoteness and a shortage of services in rural and remote areas. Telerehabilitation is a potential solution to bridge this service delivery gap. The aim of this study was to determine the criterion validity and reliability of conducting a remote musculoskeletal assessment of the ankle joint complex using telerehabilitation technologies compared with a face-to-face assessment. Methods. This study utilized a repeated-measures design to assess 15 subjects (mean age 24.5, SD 10.8 years) presenting with ankle pain. Conventional face-to-face assessments were compared with assessments performed via a telerehabilitation system. Results. Agreement of 93.3% in patho-anatomical diagnosis and 80% exact agreement (χ² = 4.267; p < 0.04) in primary systems diagnosis was found between face-to-face and telerehabilitation assessments. Clinical observations showed very strong agreement (κ = 0.92) for categorical data and significant agreement (93.3% agreement; χ² = 234.4; p < 0.001) for binary data. A high level of inter- and intrarater reliability was found for the telerehabilitation assessments. Conclusions. This study demonstrates the criterion validity and reliability of remote musculoskeletal assessments of the ankle joint complex using telerehabilitation. Copyright © 2010 John Wiley & Sons, Ltd.

64 citations
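Percent exact agreement, the simplest of the statistics reported in the abstract above, can be sketched in a few lines. The patient findings below are hypothetical illustrations, not the study's data:

```python
def percent_agreement(a, b):
    """Exact agreement (%) between two assessment methods on paired findings."""
    assert len(a) == len(b), "ratings must be paired"
    return 100 * sum(x == y for x, y in zip(a, b)) / len(a)

# Hypothetical binary findings (1 = present, 0 = absent) for 15 patients,
# scored face-to-face and via a telerehabilitation system
face_to_face = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1]
telerehab    = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1]
print(round(percent_agreement(face_to_face, telerehab), 1))  # 93.3
```

Percent agreement alone does not correct for chance, which is why studies like this also report kappa for categorical observations.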

Journal ArticleDOI
01 Mar 1994-Sleep
TL;DR: The present paper has three major objectives: first, to document the reliability of a published criteria set for sleep/wake scoring in the rat; second, to develop a computer algorithm implementation of the criteria set; and third, to document the reliability and functional validity of the computer algorithm for sleep/wake scoring.
Abstract: The present paper has three major objectives: first, to document the reliability of a published criteria set for sleep/wake scoring in the rat; second, to develop a computer algorithm implementation of the criteria set; and third, to document the reliability and functional validity of the computer algorithm for sleep/wake scoring. The reliability of the visual criteria was assessed by having two raters separately score 8 hours of polygraph records from the light period from five rats (14,040 10-second scoring epochs). Scored stages were waking, slow-wave sleep-1, slow-wave sleep-2, transition-type sleep, and rapid eye movement (REM) sleep. The visual criteria had good interrater reliability (Cohen's κ = 0.68), with 92.6% agreement on the waking/non-rapid eye movement (NREM) sleep/REM sleep distinction (κ = 0.89). This indicated that the criteria allow separate raters to independently classify sleep/wake stages with very good agreement. An independent group of 10 rats was used for development of an algorithm for semiautomatic computer scoring. A close implementation of the visual criteria was chosen. The algorithm was based on power spectral densities from two electroencephalogram (EEG) leads and on electromyogram (EMG) activity. Five 2-second fast Fourier transform (FFT) epochs from each EEG/EMG lead per 10-second sleep/wake scoring epoch were used to take the spatial and temporal context into account. The same group of five rats used in visual scoring was used to appraise the reliability of computerized scoring. The computer score was compared with the visual score for each rater. Agreement was lower (κ = 0.57 and 0.62 for the two raters) than in interrater visual scoring, with 87.7% and 89.1% agreement (κ = 0.82 and 0.84) in the waking/NREM sleep/REM sleep distinction. Subsequently, the computer scores of the two raters were compared with each other. This interrater reliability was better than the interrater reliability for visual scoring (κ = 0.75), with 92.4% agreement for the waking/NREM sleep/REM sleep distinction (κ = 0.89). The computer scoring algorithm was then applied to data from a third independent group of rats (n = 6) from an acoustical stimulus arousal threshold experiment, to assess the functional validity of the scoring directly with respect to arousal threshold. The computer algorithm scoring performed as well as the original visual sleep/wake stage scoring. This indicated that the lower intrarater reliability did not have a significant negative influence on the functional validity of the sleep/wake score.

64 citations
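Cohen's kappa, the agreement statistic used throughout the abstract above, corrects raw percent agreement for the agreement expected by chance. A minimal sketch with hypothetical 10-second epoch labels (W = waking, N = NREM sleep, R = REM sleep), not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same scoring epochs."""
    assert len(rater_a) == len(rater_b), "raters must score the same epochs"
    n = len(rater_a)
    # Observed proportion of epochs where the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical epoch scores from two raters
a = ["W", "W", "N", "N", "N", "R", "W", "N", "R", "N"]
b = ["W", "W", "N", "N", "R", "R", "W", "N", "R", "N"]
print(round(cohens_kappa(a, b), 3))  # 0.846
```

Here the raters agree on 9 of 10 epochs (90%), but kappa is lower because some of that agreement would occur by chance given the label frequencies.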

Journal ArticleDOI
TL;DR: In this paper, the authors outline important factors to consider in test-retest reliability analyses, common errors, and some initial methods for conducting and reporting reliability analyses to avoid such errors.
Abstract: Psychological research and clinical practice rely heavily on psychometric testing for measuring psychological constructs that represent symptoms of psychopathology, individual difference characteristics, or cognitive profiles. Test-retest reliability assessment is crucial in the development of psychometric tools, helping to ensure that measurement variation is due to replicable differences between people regardless of time, target behavior, or user profile. While psychological studies testing the reliability of measurement tools are pervasive in the literature, many still discuss and assess this form of reliability inappropriately with regard to the specified aims of the study or the intended use of the tool. The current paper outlines important factors to consider in test-retest reliability analyses, common errors, and some initial methods for conducting and reporting reliability analyses to avoid such errors. The paper aims to highlight a persistently problematic area in psychological assessment.

64 citations
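One common error in test-retest analyses of the kind the paper above critiques is reporting Pearson's r alone: correlation is blind to systematic shifts between sessions. A minimal illustration with hypothetical scores (not from the paper):

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between paired test and retest scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical scores: every retest is exactly 5 points higher (e.g. a practice effect)
test1 = [10, 12, 15, 18, 20]
test2 = [15, 17, 20, 23, 25]
print(pearson_r(test1, test2))  # 1.0, a "perfect" correlation
print(statistics.mean(b - a for a, b in zip(test1, test2)))  # yet a 5-point systematic bias
```

A correlation of 1.0 here would wrongly suggest perfect reliability; statistics that penalize absolute disagreement (such as intraclass correlations for absolute agreement) would flag the shift.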

Journal ArticleDOI
TL;DR: A novel rating scale for classification of brain structural magnetic resonance imaging in children with cerebral palsy is described and its interrater and intrarater reliability is assessed.
Abstract: Aim To describe the development of a novel rating scale for classification of brain structural magnetic resonance imaging (MRI) in children with cerebral palsy (CP) and to assess its interrater and intrarater reliability. Method The scale consists of three sections. Section 1 contains descriptive information about the patient and MRI. Section 2 contains the graphical template of brain hemispheres onto which the lesion is transposed. Section 3 contains the scoring system for the quantitative analysis of the lesion characteristics, grouped into different global scores and subscores that separately assess side, regions, and depth. A larger interrater and intrarater reliability study was performed in 34 children with CP (22 males, 12 females; mean age at scan 9y 5mo [SD 3y 3mo], range 4y–16y 11mo; Gross Motor Function Classification System levels I [n=22], II [n=10], and III [n=2]). Results Very high interrater and intrarater reliability of the total score was found, with indices above 0.87. Reliability coefficients of the lobar and hemispheric subscores ranged between 0.53 and 0.95. Global scores for hemispheres, basal ganglia, brain stem, and corpus callosum showed reliability coefficients above 0.65. Interpretation This study presents the first visual, semi-quantitative scale for classification of brain structural MRI in children with CP. The high degree of reliability of the scale supports its potential application for investigating the relationship between brain structure and function and examining treatment response according to brain lesion severity in children with CP.

64 citations
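Reliability coefficients for continuous scores like those above are typically intraclass correlations. As a sketch, here is one common form, ICC(2,1) (two-way random effects, absolute agreement, single rater), applied to hypothetical rater scores, not the study's data:

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    scores: one row per subject, one column per rater."""
    n, k = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    # Two-way ANOVA decomposition of the score matrix
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((r - grand) ** 2 for r in row_means)   # between subjects
    ss_cols = n * sum((c - grand) ** 2 for c in col_means)   # between raters
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical total scores from two raters for five children
scores = [[30, 31], [14, 15], [22, 22], [8, 10], [27, 26]]
print(round(icc_2_1(scores), 3))  # 0.991
```

Because ICC(2,1) measures absolute agreement, a rater who systematically scores higher lowers the coefficient, unlike a simple correlation between the two raters' scores.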


Network Information
Related Topics (5)
Rehabilitation: 46.2K papers, 776.3K citations (69% related)
Ankle: 30.4K papers, 687.4K citations (68% related)
Systematic review: 33.3K papers, 1.6M citations (68% related)
Activities of daily living: 18.2K papers, 592.8K citations (68% related)
Validity: 13.8K papers, 776K citations (67% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  42
2022  78
2021  86
2020  83
2019  86
2018  67