scispace - formally typeset

Intra-rater reliability

About: Intra-rater reliability is a research topic. Over the lifetime, 2073 publications have been published within this topic receiving 140968 citations.


Papers
Journal ArticleDOI
TL;DR: The present article discusses in detail two needs: a maze reliable for measuring individual differences, and a decision on the relative validities of the different maze scores.
Abstract: The following discussion and experiments arose from the attempt at an investigation of the inheritance of maze-learning ability in rats. (See the preliminary report on the latter by Tolman (15).) In the course of that investigation it soon became evident that a maze reliable for measuring individual differences must be discovered and also that some decision must be made as to the relative validities of the different possible maze-scores, i.e., number of blind entrances, number of retracings, time, and number of perfect runs. The present article will discuss in detail these two needs. Part I will deal with reliability; and Part II with the relative validities of the different maze scores.

44 citations

Journal ArticleDOI
TL;DR: Trained reviewers can reliably assess paediatric inpatient medication related events for the presence of an ADE and for its seriousness; preventability, rated with a decision algorithm and a six point scale, proved a more difficult judgement.
Abstract:
Background: In medication safety research studies, medication related events are often classified by type, seriousness, and degree of preventability, but there is currently no universally reliable “gold standard” approach. The reliability (reproducibility) of this process is important as the targeting of prevention strategies is often based on specific categories of event. The aim of this study was to determine the reliability of reviewer judgements regarding classification of paediatric inpatient medication related events.
Methods: Three health professionals independently reviewed suspected medication related events and classified them by type (adverse drug event (ADE), potential ADE, medication error, rule violation, or other event). ADEs and potential ADEs were then rated according to seriousness of patient injury using a seven point scale and preventability using a decision algorithm and a six point scale. Inter- and intra-rater reliabilities were calculated using the kappa (κ) statistic.
Results: Agreement between all three reviewers regarding event type ranged from “slight” for potential ADEs (κ = 0.20, 95% CI 0.00 to 0.40) to “substantial” agreement for the presence of an ADE (κ = 0.73, 95% CI 0.69 to 0.77). Agreement ranged from “slight” (κ = 0.06, 95% CI 0.02 to 0.10) to “fair” (κ = 0.34, 95% CI 0.30 to 0.38) for seriousness classifications but, by collapsing the seven categories into serious versus not serious, “moderate” agreement was found (κ = 0.50, 95% CI 0.46 to 0.54). For preventability decision, overall agreement was “fair” (κ = 0.37, 95% CI 0.33 to 0.41) but “moderate” for not preventable events (κ = 0.47, 95% CI 0.43 to 0.51).
Conclusion: Trained reviewers can reliably assess paediatric inpatient medication related events for the presence of an ADE and for its seriousness. Assessments of preventability appeared to be a more difficult judgement in children and approaches that improve reliability would be useful.
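The study above quantifies inter- and intra-rater reliability with the kappa (κ) statistic, which corrects observed agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa for two raters (illustrative data only, not the study's code):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the chance agreement from the raters' marginal
    label frequencies."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters classifying 10 events as ADE / not ADE.
a = ["ADE", "ADE", "no", "no", "ADE", "no", "ADE", "no", "no", "ADE"]
b = ["ADE", "no", "no", "no", "ADE", "no", "ADE", "no", "ADE", "ADE"]
print(round(cohens_kappa(a, b), 2))  # → 0.6
```

Here the raters agree on 8 of 10 items (p_o = 0.8) but half that agreement is expected by chance (p_e = 0.5), giving κ = 0.6 — "moderate" on the scale the paper uses.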

44 citations

Journal ArticleDOI
TL;DR: The standardized clinical tests exhibited moderate to substantial reliability in patients with axial neck pain referred for diagnostic facet joint blocks, and the incorporation of these tests into a clinical prediction model to screen patients before referral for diagnostic facet joint blocks is justified.

44 citations

Journal ArticleDOI
TL;DR: The EFIP is a reliable and valid instrument for evaluating the effect of physical activity on frailty in research and in clinical practice; its responsiveness and other clinimetric properties will be assessed in a larger study population.
Abstract:
Background: Physical activity is assumed to be important in the prevention and treatment of frailty. It is unclear, however, to what extent frailty can be influenced because instruments designed to assess frailty have not been validated as evaluative outcome instruments in clinical practice.
Objectives: The aims of this study were: (1) to develop a frailty index (ie, the Evaluative Frailty Index for Physical Activity [EFIP]) based on the method of deficit accumulation and (2) to test the clinimetric properties of the EFIP.
Design: The content of the EFIP was determined using a written Delphi procedure. Intrarater reliability, interrater reliability, and construct validity were determined in an observational study (n=24).
Method: Intrarater reliability and interrater reliability were calculated using Cohen kappa and intraclass correlation coefficients (ICCs). Construct validity was determined by correlating the score on the EFIP with those on the Timed “Up & Go” Test (TUG), the Performance-Oriented Mobility Assessment (POMA), and the Cumulative Illness Rating Scale for Geriatrics (CIRS-G).
Results: Fifty items were included in the EFIP. Interrater reliability (Cohen kappa=0.72, ICC=.96) and intrarater reliability (Cohen kappa=0.77 and 0.80, ICC=.93 and .98) were good. As expected, a fair to moderate correlation with the TUG, POMA, and CIRS-G was found (.61, −.70, and .66, respectively).
Limitations: Reliability and validity of the EFIP have been tested in a small sample. These and other clinimetric properties, such as responsiveness, will be assessed or reassessed in a larger study population.
Conclusion: The EFIP is a reliable and valid instrument to evaluate the effect of physical activity on frailty in research and in clinical practice.
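The EFIP study reports reliability as both Cohen kappa and intraclass correlation coefficients (ICCs). The abstract does not state which ICC form was used; assuming the common two-way random-effects, absolute-agreement, single-measure form ICC(2,1), a minimal ANOVA-based sketch:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n subjects x k raters) array of scores."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Sums of squares for a two-way ANOVA without replication.
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)            # between-subjects mean square
    msc = ss_cols / (k - 1)            # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: two raters scoring five subjects.
scores = [[9, 8], [6, 7], [8, 8], [7, 6], [10, 9]]
print(round(icc2_1(scores), 2))  # → 0.79
```

Unlike kappa, which treats categories as nominal, the ICC credits raters for being numerically close even when not identical, which is why the abstract can report both for the same raters.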

44 citations

Journal ArticleDOI
TL;DR: Preliminary data on dysphonic patients rating their own voice quality suggest that patients use GRBAS consistently as a self-perception tool, but their ratings agree poorly with clinicians’ ratings.
Abstract:
Objectives: To provide preliminary data on the reliability and validity of dysphonic patients rating their own voice quality.
Design: Prospective reliability/validity assessment of voice ratings in dysphonic patients.
Setting: The Royal Free Hampstead NHS Primary Care Trust.
Participants: Thirty-five adult dysphonia patients recruited from ENT referrals to a speech and language therapy department. Exclusion criteria were (i) a hearing impairment which may affect auditory discrimination and (ii) a diagnosis of cognitive impairment which may affect task comprehension.
Main outcome measures: Patient intra-rater reliability was assessed by test–retest ratings, using G (Grade), R (Rough), B (Breathy), A (Asthenic), S (Strained) (GRBAS). Validity was assessed by comparing (i) patient–clinician inter-rater reliability, (ii) patients’ GRBAS ratings with their Vocal Performance Questionnaire (VPQ) responses.
Results: (i) Patients had lower intrarater reliability than clinicians (for G of GRBAS, kappa = 0.51 versus 0.74); (ii) patients consistently rated their voices more severely than clinicians (for G of GRBAS, mean rating = 1.4 versus 1.0); (iii) clinician–patient inter-rater agreement was no better than chance (paired t-test, all P > 0.4); (iv) patients’ GRBAS ratings correlated with their VPQ responses (r > 0.4, P < 0.05).
Conclusions: Patients appear to have good validity and consistency using GRBAS as a self-perception tool. However, validity measured in terms of agreement with clinician ratings is poor. Voice patients may rate what they perceive rather than what they hear. Disagreement between patient and clinician ratings has implications for therapy aims, prognosis, patient expectations and outcomes. Where disagreement persists, the clinician may have to determine whether therapy priorities need redesigning to reflect patients’ perceived needs, or to evaluate whether patient perceptions and expectations are unrealistic.

44 citations


Network Information
Related Topics (5)
- Rehabilitation: 46.2K papers, 776.3K citations (69% related)
- Ankle: 30.4K papers, 687.4K citations (68% related)
- Systematic review: 33.3K papers, 1.6M citations (68% related)
- Activities of daily living: 18.2K papers, 592.8K citations (68% related)
- Validity: 13.8K papers, 776K citations (67% related)
Metrics
No. of papers in the topic in previous years:

Year  Papers
2023    42
2022    78
2021    86
2020    83
2019    86
2018    67