Analyzing and Interpreting Data From Likert-Type Scales
TLDR
This article provides readers who do not have an extensive statistics background with the basics needed to understand Likert scale concepts.
Abstract
Likert-type scales are frequently used in medical education and medical education research. Common uses include end-of-rotation trainee feedback, faculty evaluations of trainees, and assessment of performance after an educational intervention. A sizable percentage of the educational research manuscripts submitted to the Journal of Graduate Medical Education employ a Likert scale for part or all of the outcome assessments. Thus, understanding the interpretation and analysis of data derived from Likert scales is imperative for those working in medical education and education research. The goal of this article is to provide readers who do not have extensive statistics background with the basics needed to understand these concepts.
Developed in 1932 by Rensis Likert1 to measure attitudes, the typical Likert scale is a 5- or 7-point ordinal scale used by respondents to rate the degree to which they agree or disagree with a statement (table). In an ordinal scale, responses can be rated or ranked, but the distance between responses is not measurable. Thus, the differences between “always,” “often,” and “sometimes” on a frequency response Likert scale are not necessarily equal. In other words, one cannot assume that the difference between responses is equidistant even though the numbers assigned to those responses are. This is in contrast to interval data, in which the difference between responses can be calculated and the numbers do refer to a measurable “something.” An example of interval data would be numbers of procedures done per resident: a score of 3 means the resident has conducted 3 procedures. Interestingly, with computer technology, survey designers can create continuous measure scales that do provide interval responses as an alternative to a Likert scale. The various continuous measures for pain are well-known examples of this (figure 1).
FIGURE 1: Continuous Measure Example
TABLE: Typical Likert Scales
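The ordinal-versus-interval distinction above has a practical consequence for how Likert responses are summarized. A minimal sketch, using hypothetical response data coded 1 (“strongly disagree”) through 5 (“strongly agree”) on an assumed 5-point agreement scale:

```python
from statistics import median, mode, mean

# Hypothetical responses from 10 raters on a 5-point agreement scale.
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

# The median and mode depend only on rank order, so they are safe
# summaries for ordinal data where spacing between options is unknown.
print("median:", median(responses))
print("mode:", mode(responses))

# The mean treats the codes as interval data, i.e., it assumes the gap
# between 2 and 3 equals the gap between 4 and 5. Whether that assumption
# is defensible for Likert data is the long-running debate addressed in
# the references below.
print("mean:", mean(responses))
```

Reporting the median or mode sidesteps the equal-spacing assumption entirely, which is why they are often recommended as the default summary for single Likert items.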
Citations
Journal Article (DOI)
Preconception Perceived Stress Is Associated with Reproductive Hormone Levels and Longer Time to Pregnancy.
Karen C. Schliep, Sunni L. Mumford, Robert M. Silver, Brian D. Wilcox, Rose G. Radin, Neil J. Perkins, Noya Galai, Jihye Park, Keewan Kim, Lindsey A. Sjaarda, Torie C. Plowden, Enrique F. Schisterman +11 more
TL;DR: Preconception perceived stress appears to adversely affect sex steroid synthesis and time to pregnancy, and mechanisms likely include the effects of stress on ovulatory function, but additional mechanisms, potentially during implantation, may also exist.
Journal Article (DOI)
Predictors of Care Gaps in Home Dialysis: The Home Dialysis Virtual Ward Study
Annie-Claire Nadeau-Fredette, Christopher T. Chan, Joanne M. Bargman, Michael Copland, S Neil Finkle, Matthew J. Oliver, Robert P. Pauly, Jeffrey Perl, Nikhil Shah, Deborah Zimmerman, Karthik K. Tennankore +10 more
TL;DR: The HDVW was effective at identifying several potential care gaps, patients were satisfied across several domains of care, and the program may be valuable in supporting home dialysis patients during care transitions.
Proceedings Article (DOI)
Have a SEAT on Stage: Restoring Trust with Spectator Experience Augmentation Techniques
TL;DR: Despite contradictory results on comprehension tasks, it is shown that contrary to pre-performance explanations, visual augmentations improve the audience experience, increase their subjective comprehension and restore the trust in performers by reversing the doubt in their favour.
Journal Article (DOI)
Innovations in pre-doctoral dental education: Influencing attitudes and opinions about patients with substance use disorder.
Folarin Odusola, Jennifer L. Smith, Adam Bisaga, John T. Grbic, James B. Fine, Kelly E. Granger, Mei-Chen Hu, Frances R. Levin +7 more
TL;DR: The SBIRT training improved DDS1 attitudes and opinions toward patients with SUD with respect to all AOS questions and is a suitable model for other dental schools.
Journal Article (DOI)
Towards a Platform for Robot-Assisted Minimally-Supervised Therapy of Hand Function: Design and Pilot Usability Evaluation
Raffaele Ranzani, Lucas Eicher, Federica Viggiano, Bernadette Engelbrecht, Jeremia P. O. Held, Olivier Lambercy, Roger Gassert +6 more
TL;DR: In this article, a hand rehabilitation robot requiring minimal therapist supervision was evaluated for real-world application, and the platform's usability and perceived workload were assessed in a pilot evaluation.
References
Book
A technique for the measurement of attitudes
TL;DR: The instrument to be described here is not, however, indirect in the usual sense of the word; it does not seek responses to items apparently unrelated to the attitudes investigated, and seeks to measure prejudice in a manner less direct than is true of the usual prejudice scale.
Journal Article (DOI)
Likert scales, levels of measurement and the "laws" of statistics.
TL;DR: It is shown that many studies, dating back to the 1930s, consistently find that parametric statistics are robust with respect to violations of these assumptions, and that parametric methods can be utilized without concern for “getting the wrong answer”.
Journal Article (DOI)
Likert scales: how to (ab)use them
TL;DR: I have recently used Likert-type rating scales to measure student views on various educational interventions, providing a range of responses to a given question or statement.
Journal Article (DOI)
Resolving the 50-year debate around using and misusing Likert scales.
James Carifio, Rocco J. Perla +1 more
TL;DR: Most recently in this journal, Jamieson outlined the view that ‘Likert scales’ are ordinal in character and that they must be analysed using non-parametric statistics, which are less sensitive and less powerful than parametric statistics and are more likely to miss weaker or emerging findings.
Journal Article (DOI)
You Can't Fix by Analysis What You've Spoiled by Design: Developing Survey Instruments and Collecting Validity Evidence.
TL;DR: The aim of the present editorial is to outline a systematic process for developing and collecting reliability and validity evidence for survey instruments used in GME and GME research.