
Intra-rater reliability

About: Intra-rater reliability is a research topic. Over its lifetime, 2,073 publications have been published within this topic, receiving 140,968 citations.


Papers
01 Jan 2014
TL;DR: A preliminary assessment of the inter-analyst reliability of the RIAAT procedure, a recent tool developed for the Recording, Investigation and Analysis of Accidents at Work, shows low inter-analyst reliability for all variables tested.
Abstract: This paper presents a preliminary assessment of the inter-analyst reliability of the RIAAT procedure, a recent tool developed for the Recording, Investigation and Analysis of Accidents at Work. The study involved 5 analysts who applied the RIAAT procedure to a set of 11 accidents at work that occurred in a Portuguese company. The study focused on 8 nominal variables considered key variables within RIAAT. Reliability was measured with three coefficients for calculating the level of agreement between analysts. Overall, the results showed low inter-analyst reliability for all variables tested; the first 4 (Eurostat variables included in RIAAT) reached a nearly acceptable reliability level. The remaining 4 variables (RIAAT-specific) led to lower agreement; this may be explained by their use in the analysis and coding of more “distant” causal factors. The authors argue for the benefit of having a well-trained team of investigators rather than one person, as it enhances the reliability of the information. There is no consensus on the most appropriate coefficient to use (Craggs & Wood 2005, Hayes & Krippendorff 2007). Potter & Levine-Donnerstein (1999) discuss the differences between four of the most popular coefficients: percent agreement (%-agreement), Scott’s π, Cohen’s κ and Krippendorff’s α. They encourage the use of coefficients that take into account the agreement that would be obtained by chance (i.e. chance-corrected). This study used three reliability coefficients, namely the %-agreement, Scott’s π and Krippendorff’s α, following the suggestion of using more than one coefficient (Lombard et al. 2002, Taylor & Watkinson 2007). Unlike π and α, the %-agreement is not chance-corrected, but it was included because it is easy to calculate.
The aim of this work was to make a preliminary inter-analyst reliability assessment of an investigation tool; the study object was the RIAAT process (Recording, Investigation and Analysis of Accidents at Work), in terms of its analytical procedure and embedded coding system (Jacinto et al. 2010a, 2011). This process covers the complete cycle of accident information. The motivation lies in the fact that RIAAT is a recent approach, which still needs validation in terms of usability and reliability.
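The coefficients discussed above are straightforward to compute for two raters. As a minimal sketch (the codings below are invented for illustration, not data from the study), here is how %-agreement and Scott's π, one of the chance-corrected coefficients mentioned, can be calculated:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Fraction of items on which the two analysts assign the same code."""
    matches = sum(a == b for a, b in zip(r1, r2))
    return matches / len(r1)

def scotts_pi(r1, r2):
    """Scott's pi: agreement corrected for the agreement expected by chance,
    with chance estimated from the pooled category proportions of both raters."""
    po = percent_agreement(r1, r2)
    n = len(r1) + len(r2)
    pooled = Counter(r1) + Counter(r2)
    pe = sum((c / n) ** 2 for c in pooled.values())
    return (po - pe) / (1 - pe)

# Hypothetical codings of 10 accidents by two analysts (categories A-C)
a1 = ["A", "A", "B", "B", "C", "A", "B", "C", "A", "B"]
a2 = ["A", "B", "B", "B", "C", "A", "A", "C", "A", "C"]

print(round(percent_agreement(a1, a2), 2))  # 0.7
print(round(scotts_pi(a1, a2), 3))          # 0.542
```

Note how π falls well below the raw 70% agreement: it subtracts the agreement two analysts would reach by coding at random from the pooled category distribution, which is exactly the correction the chance-corrected coefficients in the abstract are designed to make.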

1 citations

Journal ArticleDOI
TL;DR: The relationship between problem frequency and severity has been the subject of an ongoing discussion in the usability literature, as discussed by the authors, and there is conflicting evidence as to whether more severe problems a...
Abstract: The relationship between problem frequency and severity has been the subject of an ongoing discussion in the usability literature. There is conflicting evidence as to whether more severe problems a...

1 citations

Journal ArticleDOI
TL;DR: In this paper, the GPPGA score was validated using reproducibility of scores between assessors (interrater) and by the same assessor over time (intrarater).
Abstract: The GPPGA, a novel clinical endpoint for assessing GPP severity, scores pustulation, scaling, and erythema from 0–4. To validate GPPGA as a reliable and consistent measure, reproducibility of scores was evaluated between assessors (interrater) and by the same assessor over time (intrarater). A panel of GPP clinical leaders selected 16 images representing all GPP severities. A cohort (N = 26) of GPP-experienced dermatologists and 3 additional ‘expert raters’ scored GPPGA components during 2 online sessions, separated by ≥10 days. Interrater reliability was assessed by intraclass correlation coefficient (ICC) using an absolute agreement, 2-way mixed-effects model. Intrarater reliability was assessed by ICC using an absolute agreement 2-way random-effects model. ICC thresholds were defined as <0.40, poor; 0.40–0.59, fair; 0.60–0.74, good; 0.75–1.00, excellent. Of 26 dermatologists, 20 completed both assessments, as did all expert raters. Intrarater reliability for dermatologists was excellent [median (range) ICCs: pustulation, 0.92 (0.75,1); erythema, 0.91 (0.76,1); scaling, 0.87 (0.66,1)], indicating consistent absolute agreement of severity over time, excepting 1 dermatologist who recorded a scaling ICC of 0.66 (95% CI: 0.29, 0.85; ‘good’). Expert raters were within the excellent threshold (ICC ≥0.81). For interrater reliability, absolute agreement among the 26 dermatologists was excellent in all categories and for expert raters in erythema, scaling, and total score; the pustule score ICC was 0.69 (‘good’). The results of the inter- and intrarater assessments demonstrated a high level of reliability for scoring of GPPGA by physicians globally; dermatologists were consistent over time with individual assessments of disease severity.
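The abstract reports intraclass correlation coefficients under two-way models with absolute agreement. As an illustrative sketch (not the study's actual analysis, and using invented scores), the single-rater, absolute-agreement, two-way random-effects form ICC(2,1) can be computed from ANOVA mean squares:

```python
import numpy as np

def icc2_1(X):
    """ICC(2,1): absolute agreement, two-way random effects, single rater.
    X is an (n_subjects, k_raters) matrix of scores."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    row_means = X.mean(axis=1)   # per-subject means
    col_means = X.mean(axis=0)   # per-rater means
    SSR = k * ((row_means - grand) ** 2).sum()   # between-subjects
    SSC = n * ((col_means - grand) ** 2).sum()   # between-raters
    SST = ((X - grand) ** 2).sum()
    SSE = SST - SSR - SSC                        # residual
    MSR = SSR / (n - 1)
    MSC = SSC / (k - 1)
    MSE = SSE / ((n - 1) * (k - 1))
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

# Hypothetical erythema scores (0-4) for 6 images rated by 3 raters
scores = [[0, 0, 1],
          [1, 1, 1],
          [2, 2, 3],
          [3, 2, 3],
          [4, 4, 4],
          [2, 3, 3]]
print(round(icc2_1(scores), 3))  # 0.882 -> 'excellent' per the 0.75-1.00 threshold
```

Because the absolute-agreement form keeps the between-raters term (MSC) in the denominator, a rater who scores systematically higher than the others lowers the ICC even when the rank ordering of images is identical; a consistency-form ICC would ignore that offset.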

1 citations

Journal ArticleDOI
TL;DR: In this article, the DYANE model was used to analyze the reliability growth trend of the ignition system and the reliability index of powerplant was allocated with the score allocation method.
Abstract: The reliability index of powerplant was allocated with the score allocation method. Based on the unit reliability growth test, the DYANE model was used to analyze the reliability growth trend of the ignition system. With the dual ignition system, the reliability of the ignition portion and the safe reliability of the powerplant were improved.

Network Information
Related Topics (5)
Rehabilitation
46.2K papers, 776.3K citations
69% related
Ankle
30.4K papers, 687.4K citations
68% related
Systematic review
33.3K papers, 1.6M citations
68% related
Activities of daily living
18.2K papers, 592.8K citations
68% related
Validity
13.8K papers, 776K citations
67% related
Performance Metrics
No. of papers in the topic in previous years
Year    Papers
2023    42
2022    78
2021    86
2020    83
2019    86
2018    67