scispace - formally typeset
Author

Sara A. Sparrow

Bio: Sara A. Sparrow is an academic researcher. The author has contributed to research on the topic of adaptive behavior. The author has an h-index of 1 and has co-authored 1 publication receiving 1,829 citations.

Papers
Journal Article
TL;DR: A set of criteria based upon biostatistical considerations for determining the interrater reliability of specific adaptive behavior items in a given setting was presented, and guidelines were delineated for differentiating types of adaptive behavior items that are statistically reliable from those that are reliable in a clinical or practical sense.
Abstract: A set of criteria based upon biostatistical considerations for determining the interrater reliability of specific adaptive behavior items in a given setting was presented. The advantages and limitations of extant statistical assessment procedures were discussed. Also, a set of guidelines was delineated for differentiating types of adaptive behavior items that are statistically reliable from those that are reliable in a clinical or practical sense. Data sets were presented throughout in order to illustrate the advantages of the recommended statistical procedures over other available ones.

2,017 citations
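The paper above is usually cited for its clinical-significance cut-offs for reliability coefficients such as kappa or the intraclass correlation. As a minimal sketch, assuming the thresholds commonly attributed to Cicchetti and Sparrow (below .40 poor, .40–.59 fair, .60–.74 good, .75 and above excellent) rather than values quoted on this page:

```python
# Hypothetical helper illustrating the clinical-significance cut-offs
# commonly attributed to Cicchetti & Sparrow for a reliability
# coefficient (e.g., kappa or an intraclass correlation).
# Threshold values are an assumption; verify against the paper.

def clinical_significance(r: float) -> str:
    """Classify a reliability coefficient r on the commonly cited scale."""
    if r < 0.40:
        return "poor"
    elif r < 0.60:
        return "fair"
    elif r < 0.75:
        return "good"
    else:
        return "excellent"

print(clinical_significance(0.55))  # fair
print(clinical_significance(0.80))  # excellent
```

The point of such guidelines is the distinction drawn in the abstract: an item can be statistically reliable (significantly better than chance) while still falling in the "poor" or "fair" range of practical, clinical reliability.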


Cited by
Journal ArticleDOI
TL;DR: In this paper, the authors provide criteria, guidelines, and simple rules of thumb to assist the clinician faced with the challenge of choosing an appropriate test instrument for a given psychological assessment.
Abstract: In the context of the development of prototypic assessment instruments in the areas of cognition, personality, and adaptive functioning, the issues of standardization, norming procedures, and the important psychometrics of test reliability and validity are evaluated critically. Criteria, guidelines, and simple rules of thumb are provided to assist the clinician faced with the challenge of choosing an appropriate test instrument for a given psychological assessment. Clinicians are often faced with the critical challenge of choosing the most appropriate available test instrument for a given psychological assessment of a child, adolescent, or adult of a particular age, gender, and class of disability. It is the purpose of this report to provide some criteria, guidelines, or simple rules of thumb to aid in this complex scientific decision. As such, it draws upon my experience with issues of test development, standardization, norming procedures, and important psychometrics, namely, test reliability and validity. As I and my colleagues noted in an earlier publication, the major areas of psychological functioning, in the normal development of infants, children, adolescents, adults, and elderly people, include cognitive, academic, personality, and adaptive behaviors (Sparrow, Fletcher, & Cicchetti, 1985). As such, the major examples or applications discussed in this article derive primarily, although not exclusively, from these several areas of human functioning.

7,254 citations

Journal ArticleDOI
TL;DR: In this article, the authors compared the CVI to alternative content validity indexes and concluded that the widely used CVI has advantages with regard to ease of computation, understandability, focus on agreement of relevance rather than agreement per se, focus on consensus rather than consistency, and provision of both item and scale information.
Abstract: Nurse researchers typically provide evidence of content validity for instruments by computing a content validity index (CVI), based on experts' ratings of item relevance. We compared the CVI to alternative indexes and concluded that the widely-used CVI has advantages with regard to ease of computation, understandability, focus on agreement of relevance rather than agreement per se, focus on consensus rather than consistency, and provision of both item and scale information. One weakness is its failure to adjust for chance agreement. We solved this by translating item-level CVIs (I-CVIs) into values of a modified kappa statistic. Our translation suggests that items with an I-CVI of .78 or higher for three or more experts could be considered evidence of good content validity.

2,404 citations
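The I-CVI-to-kappa translation described above can be sketched in a few lines. This is an illustration of the method as commonly described (chance agreement modeled as each expert independently rating relevant/not relevant with probability 0.5), not code from the paper; verify the formulas against the original before relying on them:

```python
from math import comb

# Sketch of translating an item-level CVI (I-CVI) into a modified kappa
# that adjusts for chance agreement. Assumes each expert rates
# relevant / not relevant with probability 0.5 under chance.

def item_cvi(relevant_ratings: int, n_experts: int) -> float:
    """I-CVI: proportion of experts rating the item relevant."""
    return relevant_ratings / n_experts

def modified_kappa(relevant_ratings: int, n_experts: int) -> float:
    i_cvi = item_cvi(relevant_ratings, n_experts)
    # probability that exactly this many experts agree by chance
    p_chance = comb(n_experts, relevant_ratings) * 0.5 ** n_experts
    return (i_cvi - p_chance) / (1 - p_chance)

# Example: 4 of 5 experts rate an item relevant -> I-CVI = 0.80,
# which exceeds the .78 threshold suggested in the abstract.
print(item_cvi(4, 5))                   # 0.8
print(round(modified_kappa(4, 5), 2))   # 0.76
```

The correction matters most for small expert panels, where a high I-CVI can arise by chance; with three or more experts, an I-CVI of .78 or higher is the level the abstract treats as evidence of good content validity.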


Journal ArticleDOI
TL;DR: A novel method of quantifying atypical strategies of social monitoring in a setting that simulates the demands of daily experience is reported, finding that fixation times on mouths and objects but not on eyes are strong predictors of degree of social competence.
Abstract: Background: Manifestations of core social deficits in autism are more pronounced in everyday settings than in explicit experimental tasks. To bring experimental measures in line with clinical observation, we report a novel method of quantifying atypical strategies of social monitoring in a setting that simulates the demands of daily experience. Enhanced ecological validity was intended to maximize between-group effect sizes and assess the predictive utility of experimental variables relative to outcome measures of social competence. Methods: While viewing social scenes, eye-tracking technology measured visual fixations in 15 cognitively able males with autism and 15 age-, sex-, and verbal IQ–matched control subjects. We reliably coded fixations on 4 regions: mouth, eyes, body, and objects. Statistical analyses compared fixation time on regions of interest between groups and correlation of fixation time with outcome measures of social competence (i.e., standardized measures of daily social adjustment and degree of autistic social symptoms). Results: Significant between-group differences were obtained for all 4 regions. The best predictor of autism was reduced eye region fixation time. Fixation on mouths and objects was significantly correlated with social functioning: increased focus on mouths predicted improved social adjustment and less autistic social impairment, whereas more time on objects predicted the opposite relationship. Conclusions: When viewing naturalistic social situations, individuals with autism demonstrate abnormal patterns of social visual pursuit consistent with reduced salience of eyes and increased salience of mouths, bodies, and objects. Fixation times on mouths and objects but not on eyes are strong predictors of degree of social competence. Arch Gen Psychiatry. 2002;59:809-816

1,893 citations

Journal ArticleDOI
TL;DR: Clinical guidelines for the diagnosis of autism in the draft version of ICD-10 were operationalized in terms of abnormalities on specific ADOS items, and an algorithm based on these items was shown to have high reliability and discriminant validity.
Abstract: The Autism Diagnostic Observation Schedule (ADOS), a standardized protocol for observation of social and communicative behavior associated with autism, is described. The instrument consists of a series of structured and semistructured presses for interaction, accompanied by coding of specific target behaviors associated with particular tasks and by general ratings of the quality of behaviors. Interrater reliability for five raters exceeded weighted kappas of .55 for each item and each pair of raters for matched samples of 15 to 40 autistic and nonautistic, mildly mentally handicapped children (M IQ = 59) between the ages of 6 and 18 years. Test-retest reliability was adequate. Further analyses compared these groups to two additional samples of autistic and nonautistic subjects with normal intelligence (M IQ = 95), matched for sex and chronological age. Analyses yielded clear diagnostic differences in general ratings of social behavior, specific aspects of communication, and restricted or stereotypic behaviors and interests. Clinical guidelines for the diagnosis of autism in the draft version of ICD-10 were operationalized in terms of abnormalities on specific ADOS items. An algorithm based on these items was shown to have high reliability and discriminant validity.

1,758 citations
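The ADOS reliability figures above are weighted kappas, which credit partial agreement between raters on an ordinal scale. A generic sketch of Cohen's weighted kappa with linear disagreement weights follows; the confusion matrix is invented for illustration, not ADOS data:

```python
import numpy as np

# Cohen's weighted kappa with linear disagreement weights for two
# raters scoring the same subjects on an ordinal scale.
# The example matrix below is hypothetical, not from the ADOS study.

def weighted_kappa(confusion: np.ndarray) -> float:
    k = confusion.shape[0]
    n = confusion.sum()
    observed = confusion / n
    # expected cell proportions from the marginals, as for unweighted kappa
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # linear weights: 0 on the diagonal, 1 at maximum disagreement
    i, j = np.indices((k, k))
    weights = np.abs(i - j) / (k - 1)
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Rows: rater A's codes (0-2); columns: rater B's codes.
ratings = np.array([[20, 5, 0],
                    [4, 15, 3],
                    [1, 2, 10]])
print(round(weighted_kappa(ratings), 3))
```

Because near-miss disagreements are down-weighted, a weighted kappa above .55 on every item and rater pair, as reported in the abstract, indicates agreement well beyond chance for ordinal behavior codings of this kind.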