scispace - formally typeset

Content validity

About: Content validity is a research topic. Over its lifetime, 8,843 publications on this topic have received 308,254 citations.


Papers
Journal ArticleDOI
TL;DR: In this article, the authors developed and validated new scales for two specific variables, perceived usefulness and perceived ease of use, which are hypothesized to be fundamental determinants of user acceptance.
Abstract: Valid measurement scales for predicting user acceptance of computers are in short supply. Most subjective measures used in practice are unvalidated, and their relationship to system usage is unknown. The present research develops and validates new scales for two specific variables, perceived usefulness and perceived ease of use, which are hypothesized to be fundamental determinants of user acceptance. Definitions of these two variables were used to develop scale items that were pretested for content validity and then tested for reliability and construct validity in two studies involving a total of 152 users and four application programs. The measures were refined and streamlined, resulting in two six-item scales with reliabilities of .98 for usefulness and .94 for ease of use. The scales exhibited high convergent, discriminant, and factorial validity. Perceived usefulness was significantly correlated with both self-reported current usage (r = .63, Study 1) and self-predicted future usage (r = .85, Study 2). Perceived ease of use was also significantly correlated with current usage (r = .45, Study 1) and future usage (r = .59, Study 2). In both studies, usefulness had a significantly greater correlation with usage behavior than did ease of use. Regression analyses suggest that perceived ease of use may actually be a causal antecedent to perceived usefulness, as opposed to a parallel, direct determinant of system usage. Implications are drawn for future research on user acceptance.
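The reliability coefficients (.98 and .94) and scale-usage correlations reported above can be computed from raw item responses. A minimal sketch using NumPy and simulated data (not the study's actual responses; the sample size of 152 and the six-item scale length are taken from the abstract, everything else is illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated data: 152 respondents answer six 7-point items driven by one latent factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=152)
items = np.clip(np.round(4 + latent[:, None] + 0.8 * rng.normal(size=(152, 6))), 1, 7)
usage = latent + rng.normal(scale=0.9, size=152)  # hypothetical self-reported usage

alpha = cronbach_alpha(items)
r = np.corrcoef(items.sum(axis=1), usage)[0, 1]   # scale-usage correlation
print(f"alpha = {alpha:.2f}, r(scale, usage) = {r:.2f}")
```

Because all six simulated items load on the same latent factor, alpha comes out high, mirroring the pattern (though not the exact values) in the study.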

40,720 citations

Book
01 Jan 1991
TL;DR: In this book, the authors discuss the role of measurement in the social sciences and propose step-by-step guidelines for scale development. However, they do not discuss the relationship between scale scores and scale length.
Abstract (table of contents):
Chapter 1: Overview (General Perspectives on Measurement; Historical Origins of Measurement in Social Science; Later Developments in Measurement; The Role of Measurement in the Social Sciences; Summary and Preview)
Chapter 2: Understanding the "Latent Variable" (Constructs Versus Measures; Latent Variable as the Presumed Cause of Item Values; Path Diagrams; Further Elaboration of the Measurement Model; Parallel "Tests"; Alternative Models; Exercises)
Chapter 3: Reliability (Continuous Versus Dichotomous Items; Internal Consistency; Reliability Based on Correlations Between Scale Scores; Generalizability Theory; Summary and Exercises)
Chapter 4: Validity (Content Validity; Criterion-related Validity; Construct Validity; What About Face Validity?; Exercises)
Chapter 5: Guidelines in Scale Development (Step 1: Determine Clearly What It Is You Want to Measure; Step 2: Generate an Item Pool; Step 3: Determine the Format for Measurement; Step 4: Have Initial Item Pool Reviewed by Experts; Step 5: Consider Inclusion of Validation Items; Step 6: Administer Items to a Development Sample; Step 7: Evaluate the Items; Step 8: Optimize Scale Length; Exercises)
Chapter 6: Factor Analysis (Overview of Factor Analysis; Conceptual Description of Factor Analysis; Interpreting Factors; Principal Components vs. Common Factors; Confirmatory Factor Analysis; Using Factor Analysis in Scale Development; Sample Size; Conclusion)
Chapter 7: An Overview of Item Response Theory (Item Difficulty; Item Discrimination; False Positives; Item Characteristic Curves; Complexities of IRT; When to Use IRT; Conclusions)
Chapter 8: Measurement in the Broader Research Context (Before the Scale Development; After the Scale Administration; Final Thoughts)
References; Index; About the Author
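Step 8, "Optimize Scale Length", involves the standard trade-off between the number of items and reliability. The classic Spearman-Brown prophecy formula (general psychometrics, not quoted from this book) predicts how reliability changes when scale length is multiplied by a factor n:

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability after multiplying scale length by length_factor
    (Spearman-Brown prophecy formula): n*r / (1 + (n-1)*r)."""
    n, r = length_factor, reliability
    return n * r / (1 + (n - 1) * r)

# E.g., halving a scale whose reliability is .90:
print(round(spearman_brown(0.90, 0.5), 2))  # → 0.82
```

The formula makes the trade-off concrete: dropping half the items of a highly reliable scale costs relatively little reliability, which is why shortening is often worthwhile.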

11,710 citations

Book
05 Jun 1991
TL;DR: The book closes with measurement in the broader research context, covering the work that comes before scale development and after scale administration.

10,722 citations

Journal ArticleDOI
TL;DR: The criteria can be used in systematic reviews of health status questionnaires, to detect shortcomings and gaps in knowledge of measurement properties, and to design validation studies.

7,439 citations

Journal ArticleDOI
TL;DR: It is shown that it is feasible to develop a checklist for assessing the methodological quality not only of randomised controlled trials but also of non-randomised studies, and that the checklist can provide a profile of a paper, alerting reviewers to its particular methodological strengths and weaknesses.
Abstract: OBJECTIVE: To test the feasibility of creating a valid and reliable checklist with the following features: appropriate for assessing both randomised and non-randomised studies; provision of both an overall score for study quality and a profile of scores not only for the quality of reporting, internal validity (bias and confounding) and power, but also for external validity. DESIGN: A pilot version was first developed, based on epidemiological principles, reviews, and existing checklists for randomised studies. Face and content validity were assessed by three experienced reviewers and reliability was determined using two raters assessing 10 randomised and 10 non-randomised studies. Using different raters, the checklist was revised and tested for internal consistency (Kuder-Richardson 20), test-retest and inter-rater reliability (Spearman correlation coefficient and sign rank test; kappa statistics), criterion validity, and respondent burden. MAIN RESULTS: The performance of the checklist improved considerably after revision of a pilot version. The Quality Index had high internal consistency (KR-20: 0.89) as did the subscales apart from external validity (KR-20: 0.54). Test-retest (r = 0.88) and inter-rater (r = 0.75) reliability of the Quality Index were good. Reliability of the subscales varied from good (bias) to poor (external validity). The Quality Index correlated highly with an existing, established instrument for assessing randomised studies (r = 0.90). There was little difference between its performance with non-randomised and with randomised studies. Raters took about 20 minutes to assess each paper (range 10 to 45 minutes). CONCLUSIONS: This study has shown that it is feasible to develop a checklist that can be used to assess the methodological quality not only of randomised controlled trials but also non-randomised studies.
It has also shown that it is possible to produce a checklist that provides a profile of the paper, alerting reviewers to its particular methodological strengths and weaknesses. Further work is required to improve the checklist and the training of raters in the assessment of external validity.
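The KR-20 statistic used for the Quality Index applies to dichotomous (0/1) items such as checklist criteria. A minimal sketch with hypothetical checklist scores (the 20-paper sample size is taken from the abstract; the 26-item count and the data are illustrative, not the study's):

```python
import numpy as np

def kr20(responses):
    """Kuder-Richardson 20 for an (n_subjects, n_items) matrix of 0/1 scores."""
    x = np.asarray(responses, dtype=float)
    k = x.shape[1]
    p = x.mean(axis=0)                       # proportion meeting each criterion
    total_var = x.sum(axis=1).var(ddof=1)    # variance of total checklist scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

# Hypothetical data: 20 papers rated on 26 dichotomous checklist items,
# with items more likely to be met the higher a paper's latent quality.
rng = np.random.default_rng(1)
quality = rng.uniform(size=(20, 1))
scores = (rng.uniform(size=(20, 26)) < quality).astype(int)
print(f"KR-20 = {kr20(scores):.2f}")
```

Because the simulated items all track a single latent quality, KR-20 comes out high; items unrelated to overall quality would drag it down, as with the external-validity subscale (KR-20: 0.54) reported above.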

6,849 citations


Network Information
Related Topics (5)
Qualitative research
39.9K papers, 2.3M citations
89% related
Psychological intervention
82.6K papers, 2.6M citations
86% related
Anxiety
141.1K papers, 4.7M citations
81% related
Health care
342.1K papers, 7.2M citations
80% related
Randomized controlled trial
119.8K papers, 4.8M citations
78% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    572
2022    1,117
2021    937
2020    907
2019    681
2018    598