Best Practices for Developing and Validating Scales for Health, Social, and Behavioral Research: A Primer.
Godfred O. Boateng, Torsten B. Neilands, Edward A. Frongillo, Hugo Melgar-Quiñonez, Sera L. Young +4 more
TLDR
This primer will equip both scientists and practitioners to understand the ontology and methodology of scale development and validation, thereby facilitating the advancement of the understanding of a range of health, social, and behavioral outcomes.

Abstract
Scale development and validation are critical to much of the work in the health, social, and behavioral sciences. However, the constellation of techniques required for scale development and evaluation can be onerous, jargon-filled, unfamiliar, and resource-intensive. Further, it is often not a part of graduate training. Therefore, our goal was to concisely review the process of scale development in as straightforward a manner as possible, both to facilitate the development of new, valid, and reliable scales, and to help improve existing ones. To do this, we have created a primer for best practices for scale development in measuring complex phenomena. This is not a systematic review, but rather the amalgamation of technical literature and lessons learned from our experiences spent creating or adapting a number of scales over the past several decades. We identified three phases that span nine steps. In the first phase, items are generated and the validity of their content is assessed. In the second phase, the scale is constructed. Steps in scale construction include pre-testing the questions, administering the survey, reducing the number of items, and understanding how many factors the scale captures. In the third phase, scale evaluation, the number of dimensions is tested, reliability is tested, and validity is assessed. We have also added examples of best practices to each step. In sum, this primer will equip both scientists and practitioners to understand the ontology and methodology of scale development and validation, thereby facilitating the advancement of our understanding of a range of health, social, and behavioral outcomes.
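As a hypothetical illustration of the item-reduction step mentioned in the abstract, corrected item-total correlations are one common screening device: an item that correlates weakly with the sum of the remaining items (a conventional rule of thumb is below .30) is a candidate for removal. The function names and the cutoff here are illustrative assumptions, not taken from the primer itself.

```python
# Illustrative sketch of item reduction via corrected item-total
# correlations. Responses are rows (respondents) of item scores.
import statistics


def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)


def corrected_item_total(responses):
    """For each item, correlate it with the total of the OTHER items."""
    n_items = len(responses[0])
    correlations = []
    for i in range(n_items):
        item = [row[i] for row in responses]
        rest = [sum(row) - row[i] for row in responses]  # total minus item i
        correlations.append(pearson(item, rest))
    return correlations


def flag_weak_items(responses, cutoff=0.30):
    """Indices of items whose corrected item-total correlation < cutoff."""
    return [i for i, r in enumerate(corrected_item_total(responses))
            if r < cutoff]
```

In practice this screening would be done alongside, not instead of, the factor-analytic steps the primer describes.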
Citations
Handbook Of Psychological Testing
References
Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives
Li-tze Hu, Peter M. Bentler +1 more
TL;DR: This article examined the adequacy of the conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice; the results suggest that, for the ML method, a cutoff value close to .95 for TLI, BL89, CFI, RNI, and G...
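A minimal sketch of how cutoff criteria of the kind this reference discusses are often applied when judging model fit. The specific thresholds below (CFI/TLI at or above .95, RMSEA at or below .06, SRMR at or below .08) are the conventional readings of Hu and Bentler's recommendations, and the function name is an illustrative assumption.

```python
# Illustrative check of common fit-index cutoffs for a fitted
# covariance structure model. Thresholds are conventions, not laws.
def acceptable_fit(cfi, tli, rmsea, srmr):
    """Return (overall_ok, per-criterion results) for common cutoffs."""
    checks = {
        "CFI >= .95": cfi >= 0.95,
        "TLI >= .95": tli >= 0.95,
        "RMSEA <= .06": rmsea <= 0.06,
        "SRMR <= .08": srmr <= 0.08,
    }
    return all(checks.values()), checks
```

For example, `acceptable_fit(0.97, 0.96, 0.05, 0.04)` passes every criterion, while a CFI of 0.90 would fail the first one; in real analyses these indices come from an SEM package rather than being typed in by hand.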
The theory of planned behavior
TL;DR: This article reviews the theory of planned behavior (Ajzen, 1985, 1987) and some unresolved issues, concluding that the theory is well supported by empirical evidence: intentions to perform behaviors of different kinds can be predicted with high accuracy from attitudes toward the behavior, subjective norms, and perceived behavioral control, and these intentions, together with perceptions of behavioral control, account for considerable variance in actual behavior.
Coefficient alpha and the internal structure of tests.
TL;DR: This paper presents a general formula (α), of which the Kuder-Richardson coefficient of equivalence is a special case; it is shown to be the mean of all split-half coefficients resulting from different splittings of a test, and therefore an estimate of the correlation between two random samples of items from a universe of items like those in the test.
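The coefficient this reference introduces (Cronbach's alpha) follows directly from its standard definition, α = k/(k−1) · (1 − Σσ²ᵢ / σ²ₜ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₜ the variance of the total scores. A minimal sketch in plain Python using sample variances; real analyses would typically use a statistics package:

```python
# Cronbach's alpha from its defining formula.
import statistics


def cronbach_alpha(responses):
    """responses: list of respondents, each a list of k item scores."""
    k = len(responses[0])
    item_vars = [statistics.variance([row[i] for row in responses])
                 for i in range(k)]
    total_var = statistics.variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

Perfectly parallel items yield an alpha of 1.0, and alpha falls as item covariances shrink relative to item variances.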
A coefficient of agreement for nominal scales
TL;DR: The author presents a procedure for having two or more judges independently categorize a sample of units and for determining the degree and significance of their agreement, i.e., the extent to which the judgments are reproducible (reliable).
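The agreement statistic this reference introduces (Cohen's kappa) corrects the observed proportion of agreement p₀ for the agreement p_e expected by chance: κ = (p₀ − p_e)/(1 − p_e). A minimal two-rater sketch, assuming nominal labels held in plain Python lists:

```python
# Cohen's kappa for two raters assigning nominal labels to the same units.
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equal-length label lists."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed proportion of exact agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label proportions.
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in labels)
    return (p_o - p_e) / (1 - p_e)
```

Perfect agreement gives κ = 1, agreement no better than chance gives κ = 0, and values below zero indicate systematic disagreement.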
Research electronic data capture (REDCap)-A metadata-driven methodology and workflow process for providing translational research informatics support
Paul A. Harris, Robert W. Taylor, Robert J. Thielke, Jonathon Payne, Nathaniel Gonzalez, José G. Conde +5 more
TL;DR: Research electronic data capture (REDCap) is a novel workflow methodology and software solution designed for rapid development and deployment of electronic data capture tools to support clinical and translational research.