
The adequacy of response rates to online and paper surveys:
what can be done?
Author
Nulty, Duncan
Published
2008
Journal Title
Assessment & Evaluation in Higher Education
DOI
https://doi.org/10.1080/02602930701293231
Copyright Statement
© 2008 Taylor & Francis. This is the author-manuscript version of the paper. Reproduced in
accordance with the copyright policy of the publisher. Please refer to the journal link for access
to the definitive, published version.
Downloaded from
http://hdl.handle.net/10072/26182
Link to published version
http://www.tandf.co.uk/journals/titles/02602938.asp
Griffith Research Online
https://research-repository.griffith.edu.au

The adequacy of response rates to on-line and paper surveys:
What can be done?
Duncan D. Nulty
Griffith Institute for Higher Education, Griffith University.
This is a pre-print of an article submitted for consideration in Assessment and
Evaluation in Higher Education (2008) (copyright Taylor and Francis). Assessment
and Evaluation in Higher Education is available online at:
http://journalsonline.tandf.co.uk
Final version in press 2008: Assessment and Evaluation in Higher Education.
Expected publication: Volume 33, Number 3 (June 2008).
Abstract
This article is about differences between, and the adequacy of, response rates to on-line
and paper-based course and teaching evaluation surveys. Its aim is to provide practical
guidance on these matters.
The first part of the article gives an overview of on-line surveying in general, a review of
data relating to survey response rates, and practical advice to help boost response
rates. The second part of the article discusses when a response rate may be considered
large enough for the survey data to provide adequate evidence for accountability and
improvement purposes. The article ends with suggestions for improving the
effectiveness of evaluation strategy. These suggestions are: to seek to obtain the
highest response rates possible to all surveys; to take account of probable effects of
survey design and methods on the feedback obtained when interpreting that feedback;
and to enhance this action by making use of data derived from multiple methods of
gathering feedback.
On-line surveying in general
There are many advantages associated with the use of information technology to
support approaches to evaluation (Dommeyer, Baum, Hanna, & Chapman, 2004;
Salmon, Deasy, & Garrigan, 2004; Watt, Simpson, McKillop, & Nunn, 2002). As
examples, Watt et al. (2002) note that "using web-based evaluation questionnaires can
bypass many of the bottlenecks in the evaluation system (e.g. data entry and
administration) and move to a more 'just in time' evaluation model." (p.327). Another
advantage is avoiding the need to administer surveys in-class (Dommeyer, Baum,
Hanna, & Chapman, 2004). Unsurprisingly, the use of web-based surveying for course
and teaching evaluation is growing (Hastie & Palmer, 1997; Seal & Przasnyski, 2001).
This growth is happening despite concerns from students (e.g.
regarding confidentiality and ease of use) (Dommeyer, Baum, & Hanna, 2002), and
concerns from staff (e.g. about the adequacy of response rates) (Dommeyer, Baum,
Chapman, & Hanna, 2002).
On-line surveying practice varies greatly. For example, in Australia, the University of
South Australia uses a system supporting solely on-line administration of surveys, while
Murdoch University and Curtin University, among others, are moving the same way.
Griffith University and Queensland University of Technology have each developed
integrated web-based systems that take a hybrid approach offering academics a choice
of paper or on-line administration for their surveys. Respondents, however, have no
choice: they either receive a paper-based survey or an on-line survey. Other emerging
systems allow choice of response mode by combining multiple modes of administration
and response (Pearson Assessments, 2006), thereby allowing survey designers to
better match the method of survey administration to the needs, abilities or preferences
of respondents and avoid skewing the data.
Despite these variations, there are some common features to on-line surveying practice.
These have been described by Dommeyer, Baum, Hanna and Chapman (2004), who
reported that a typical online evaluation involves: giving students assurances that their
responses will be de-identified and that aggregate reports will be made available only
after the final grades are determined; providing students with a URL to access the
survey, generally using their student ID number; students responding numerically to
multiple-response items and typing answers to open-ended questions; providing
students with a receipt verifying that they have completed the evaluation; and providing
at least two weeks in which the students can respond, usually near the end of
term/semester (p.612).
Comparability of on-line and on-paper survey response-rate data
McCormack (2003) reported that there are "new expectations in relation to the
evaluation of teaching, for example, expectations about the role of evaluation of
teaching in promotion and probation and about the public availability of student
evaluation results on institution web sites ...". More specifically, the expectations are that
teaching evaluations should be used directly, openly and compulsorily in promotion and
probation decisions, and that data on student evaluation of courses should be made
available publicly to inform the public. Such expectations may be seen as an extension
of the change in the focus of teaching and course evaluations from formative to
summative (Ballantyne, 2003).
These changes in expectations and focus are occurring at the same time that the use of
on-line surveying is increasing. Considered together, this has raised interest in issues
around response rates to these surveys. Yet, a recent review of literature regarding
instruments for obtaining student feedback (Richardson, 2005) claimed that "little is
known about the response rates obtained in electronic surveys, or whether different
modes of administration yield similar patterns of results" (p.406).
Closer scrutiny of the literature, however, reveals that a good deal is known. Moreover,
there is also a fair amount of information available in relation to the comparison between
patterns of results obtained through using different modes of administration of surveys.
Some of that literature is reviewed below – with the caveat that while it is strongly
suggestive of what one might call a "prevailing position", it also illustrates substantial
variability.
Key points from later sections of the article

Given the anonymity of responses, and the impossibility of using demographic data to predict attitudinal variables in students, there is no viable way to systematically target surveys at a minimal sample of students that would be representative of the whole group; the only recourse is to seek the highest response rates possible.
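The question of when a response rate is "large enough" can be illustrated with standard survey-sampling arithmetic: compute the sample size required for a chosen confidence level and margin of error, apply a finite-population correction for the class size, and express the result as a fraction of enrolment. The sketch below illustrates this standard approach only; it is not necessarily the article's exact formulation, and the function names and default parameters are assumptions.

```python
import math

def required_responses(class_size: int, margin_of_error: float = 0.10,
                       z: float = 1.96, p: float = 0.5) -> int:
    """Minimum number of responses needed from a class of `class_size`
    students, using the standard sample-size formula with a
    finite-population correction.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the most
    conservative assumption about response variability.
    """
    # Required sample size for an effectively infinite population.
    x = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite-population correction for a class of `class_size`.
    n = (class_size * x) / (x + class_size - 1)
    return math.ceil(n)

def required_response_rate(class_size: int, **kwargs) -> float:
    """Required responses expressed as a fraction of the class."""
    return required_responses(class_size, **kwargs) / class_size
```

For example, at a 10% margin of error and 95% confidence, a class of 100 needs about 50 responses (a 50% response rate), while a class of 20 needs 17 (an 85% response rate), which is why smaller classes demand much higher response rates for the same precision.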

The most prevalent methods for boosting on-line survey response rates are:
1. repeat reminder emails to non-respondents (students);
2. repeat reminder emails to survey owners (academics);
3. incentives to students in the form of prizes for respondents, awarded through a lottery.

Watt et al.'s (2002) research suggests that when paper surveys of courses and teaching are not administered face-to-face, the response rates might be as low as for non-face-to-face on-line surveys.

It seems likely that this is because one of the main benefits (and uses) of the on-line survey process is to avoid the need to conduct the survey in class (Dommeyer, Baum, Hanna, & Chapman, 2004). 

The result will be a survey with a low overall response rate, made up of students who are mostly familiar with, able to use, and favourably disposed toward on-line teaching and learning provisions of the course.