Journal ArticleDOI

Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES)

29 Sep 2004-Journal of Medical Internet Research (JMIR Publications Inc., Toronto, Canada)-Vol. 6, Iss: 3
TL;DR: The JMIR presents a checklist of recommendations for authors in an effort to ensure complete descriptions of Web-based surveys; it is hoped that author adherence to the checklist will increase the usefulness of such reports.
Abstract: An error in the CHERRIES statement has been corrected (J Med Internet Res 2004;6[3]:e34). In the original paper, in table 1, denominator and numerator were flipped in the recommendations on how response rates (view rate, participation rate, and completion rate) should be calculated. The view rate should be the ratio of unique survey visitors divided by unique site visitors. The participation rate should be the ratio of those who agreed to participate divided by unique first survey page visitors. The completion rate is the ratio of the number of people who finished the survey divided by those who agreed to participate. The corrections have been made in the table in both columns. [J Med Internet Res 2012;14(1):e8]
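The corrected rate definitions translate directly into simple ratios. The sketch below illustrates each rate exactly as defined in the corrected table; all visitor counts are hypothetical.

```python
def view_rate(unique_survey_visitors, unique_site_visitors):
    """View rate = unique survey visitors / unique site visitors."""
    return unique_survey_visitors / unique_site_visitors

def participation_rate(agreed_to_participate, unique_first_page_visitors):
    """Participation rate = agreed to participate / unique first survey page visitors."""
    return agreed_to_participate / unique_first_page_visitors

def completion_rate(finished_survey, agreed_to_participate):
    """Completion rate = finished the survey / agreed to participate."""
    return finished_survey / agreed_to_participate

# Hypothetical funnel: 2000 site visitors, 500 saw the survey,
# 400 agreed to participate, 300 finished.
print(view_rate(500, 2000))          # 0.25
print(participation_rate(400, 500))  # 0.8
print(completion_rate(300, 400))     # 0.75
```

Note that the denominator shrinks at each stage of the funnel, which is why flipping numerator and denominator (as in the original table) produced values greater than 1 and had to be corrected.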
Citations
Journal ArticleDOI
TL;DR: A checklist instrument was developed that has the potential to improve reporting and provides a basis for evaluating the validity and applicability of eHealth trials.
Abstract: Background: Web-based and mobile health interventions (also called “Internet interventions” or "eHealth/mHealth interventions") are tools or treatments, typically behaviorally based, that are operationalized and transformed for delivery via the Internet or mobile platforms. These include electronic tools for patients, informal caregivers, healthy consumers, and health care providers. The Consolidated Standards of Reporting Trials (CONSORT) statement was developed to improve the suboptimal reporting of randomized controlled trials (RCTs). While the CONSORT statement can be applied to provide broad guidance on how eHealth and mHealth trials should be reported, RCTs of web-based interventions pose very specific issues and challenges, in particular related to reporting sufficient details of the intervention to allow replication and theory-building. Objective: To develop a checklist, dubbed CONSORT-EHEALTH (Consolidated Standards of Reporting Trials of Electronic and Mobile HEalth Applications and onLine TeleHealth), as an extension of the CONSORT statement that provides guidance for authors of eHealth and mHealth interventions. Methods: A literature review was conducted, followed by a survey among eHealth experts and a workshop. Results: A checklist instrument was constructed as an extension of the CONSORT statement. The instrument has been adopted by the Journal of Medical Internet Research (JMIR) and authors of eHealth RCTs are required to submit an electronic checklist explaining how they addressed each subitem. Conclusions: CONSORT-EHEALTH has the potential to improve reporting and provides a basis for evaluating the validity and applicability of eHealth trials. Subitems describing how the intervention should be reported can also be used for non-RCT evaluation reports. 
As part of the development process, an evaluation component is essential; therefore, feedback from authors will be solicited, and a before-after study will evaluate whether reporting has been improved.

1,242 citations


Cites methods from "Improving the Quality of Web Survey..."

  • ...While at JMIR we are requiring authors to submit the CONSORT (Consolidated Standards of Reporting Trials) checklist [7-11] and use additional checklists for some aspects of these trials (eg, Checklist for Reporting Results of Internet E-Surveys [CHERRIES] [12]), internationally developed and adopted reporting guidelines specifically for eHealth and mHealth trials are lacking....


  • ...Our previously published CHERRIES guideline for reporting web-based surveys [12] may provide additional guidance and may be seen as a supplement to subitem 6a-i, which deals with the common case where outcomes were collected through online questionnaires....


Journal ArticleDOI
TL;DR: This article describes how TurkPrime saves time and resources, improves data quality, and allows researchers to design and implement studies that were previously very difficult or impossible to carry out on MTurk.
Abstract: In recent years, Mechanical Turk (MTurk) has revolutionized social science by providing a way to collect behavioral data with unprecedented speed and efficiency. However, MTurk was not intended to be a research tool, and many common research tasks are difficult and time-consuming to implement as a result. TurkPrime was designed as a research platform that integrates with MTurk and supports tasks that are common to the social and behavioral sciences. Like MTurk, TurkPrime is an Internet-based platform that runs on any browser and does not require any downloads or installation. Tasks that can be implemented with TurkPrime include: excluding participants on the basis of previous participation, longitudinal studies, making changes to a study while it is running, automating the approval process, increasing the speed of data collection, sending bulk e-mails and bonuses, enhancing communication with participants, monitoring dropout and engagement rates, providing enhanced sampling options, and many others. This article describes how TurkPrime saves time and resources, improves data quality, and allows researchers to design and implement studies that were previously very difficult or impossible to carry out on MTurk. TurkPrime is designed as a research tool whose aim is to improve the quality of the crowdsourcing data collection process. Various features have been and continue to be implemented on the basis of feedback from the research community. TurkPrime is a free research platform.

1,241 citations


Cites background from "Improving the Quality of Web Survey..."

  • ...It is typically good practice to report the completion rate (Eysenbach, 2004); however, this information is not available on MTurk....


Journal ArticleDOI
TL;DR: Survey research merits rigorous design and analysis; the aim of a survey is to gather reliable and unbiased data from a representative sample of respondents, and investigators increasingly administer questionnaires to clinicians about clinical practice.
Abstract: Survey research is an important form of scientific inquiry [1] that merits rigorous design and analysis. [2] The aim of a survey is to gather reliable and unbiased data from a representative sample of respondents. [3] Increasingly, investigators administer questionnaires to clinicians about

1,024 citations


Cites background from "Improving the Quality of Web Survey..."

  • ...Although infrequently adopted, several recommendations have been published for reporting findings from postal and electronic surveys.(43) One set of recommended questions to consider when writing a report of findings from postal surveys appears in Table 4....


Journal ArticleDOI
TL;DR: The EQUATOR Network is an international initiative that aims to enhance the reliability and value of the published health research literature by providing resources, education, and training to facilitate good research reporting, and by assisting in the development, dissemination, and implementation of robust reporting guidelines.
Abstract: Growing evidence demonstrates widespread deficiencies in the reporting of health research studies. The EQUATOR Network is an international initiative that aims to enhance the reliability and value of the published health research literature. EQUATOR provides resources, education and training to facilitate good research reporting and assists in the development, dissemination and implementation of robust reporting guidelines. This paper presents a collection of tools and guidelines available on the EQUATOR website (http://www.equator-network.org) that have been developed to increase the accuracy and transparency of health research reporting.

962 citations

Journal ArticleDOI
TL;DR: Members of the PatientsLikeMe community reported a range of benefits that may be related to the extent of site use; third-party validation and longitudinal evaluation are an important next step in continuing to evaluate the potential of online data-sharing platforms.
Abstract: Background: PatientsLikeMe is an online quantitative personal research platform for patients with life-changing illnesses to share their experience using patient-reported outcomes, find other patients like them matched on demographic and clinical characteristics, and learn from the aggregated data reports of others to improve their outcomes. The goal of the website is to help patients answer the question: “Given my status, what is the best outcome I can hope to achieve, and how do I get there?” Objective: Using a cross-sectional online survey, we sought to describe the potential benefits of PatientsLikeMe in terms of treatment decisions, symptom management, clinical management, and outcomes. Methods: Almost 7,000 members from six PatientsLikeMe communities (amyotrophic lateral sclerosis [ALS], Multiple Sclerosis [MS], Parkinson’s Disease, human immunodeficiency virus [HIV], fibromyalgia, and mood disorders) were sent a survey invitation using an internal survey tool (PatientsLikeMe Lens). Results: Complete responses were received from 1323 participants (19% of invited members). Between-group demographics varied according to disease community. Users perceived the greatest benefit in learning about a symptom they had experienced; 72% (952 of 1323) rated the site “moderately” or “very helpful.” Patients also found the site helpful for understanding the side effects of their treatments (n = 757, 57%). Nearly half of patients (n = 559, 42%) agreed that the site had helped them find another patient who had helped them understand what it was like to take a specific treatment for their condition. More patients found the site helpful with decisions to start a medication (n = 496, 37%) than to change a medication (n = 359, 27%), change a dosage (n = 336, 25%), or stop a medication (n = 290, 22%). Almost all participants (n = 1,249, 94%) were diagnosed when they joined the site. 
Most (n = 824, 62%) experienced no change in their confidence in that diagnosis or had an increased level of confidence (n = 456, 34%). Use of the site was associated with increasing levels of comfort in sharing personal health information among those who had initially been uncomfortable. Overall, 12% of patients (n = 151 of 1320) changed their physician as a result of using the site; this figure was doubled in patients with fibromyalgia (21%, n = 33 of 150). Patients reported community-specific benefits: 41% of HIV patients (n = 72 of 177) agreed they had reduced risky behaviors and 22% of mood disorders patients (n = 31 of 141) agreed they needed less inpatient care as a result of using the site. Analysis of the Web access logs showed that participants who used more features of the site (eg, posted in the online forum) perceived greater benefit. Conclusions: We have established that members of the community reported a range of benefits, and that these may be related to the extent of site use. Third party validation and longitudinal evaluation is an important next step in continuing to evaluate the potential of online data-sharing platforms. [J Med Internet Res 2010;12(2):e19]

577 citations


Cites methods from "Improving the Quality of Web Survey..."

  • ...The following information is provided to comply with the Checklist for Reporting Results of Internet E-Surveys [30]....


References
Journal ArticleDOI
TL;DR: This paper concerns the use of the Internet in the research process, from identifying research issues through qualitative research, through using the Web for surveys and clinical trials, to pre-publishing and publishing research results.
Abstract: This paper concerns the use of the Internet in the research process, from identifying research issues through qualitative research, through using the Web for surveys and clinical trials, to pre-publishing and publishing research results. Material published on the Internet may be a valuable resource for researchers desiring to understand people and the social and cultural contexts within which they live outside of experimental settings, with due emphasis on the interpretations, experiences, and views of 'real world' people. Reviews of information posted by consumers on the Internet may help to identify health beliefs, common topics, motives, information, and emotional needs of patients, and point to areas where research is needed. The Internet can further be used for survey research. Internet-based surveys may be conducted by means of interactive interviews or by questionnaires designed for self-completion. Electronic one-to-one interviews can be conducted via e-mail or using chat rooms. Questionnaires can be administered by e-mail (e.g. using mailing lists), by posting to newsgroups, and on the Web using fill-in forms. In "open" web-based surveys, selection bias occurs due to the non-representative nature of the Internet population, and (more importantly) through self-selection of participants, i.e. the non-representative nature of respondents, also called the 'volunteer effect'. A synopsis of important techniques and tips for implementing Web-based surveys is given. Ethical issues involved in any type of online research are discussed. Internet addresses for finding methods and protocols are provided. The Web is also being used to assist in the identification and conduction of clinical trials. For example, the web can be used by researchers doing a systematic review who are looking for unpublished trials. Finally, the web is used for two distinct types of electronic publication. 
Type 1 publication is unrefereed publication of protocols or work in progress (a 'post-publication' peer review process may take place), whereas Type 2 publication is peer-reviewed and will ordinarily take place in online journals.

623 citations


"Improving the Quality of Web Survey..." refers background in this paper

  • ...As explained in an accompanying editorial [4] as well as in a previous review [5], such surveys can be subject to considerable bias....


Journal ArticleDOI
TL;DR: Among a convenience sample recruited via the Internet, results from those randomly assigned to Internet participation were at least as good as, if not better than, results from those assigned mailed questionnaires, with less recruitment effort required.
Abstract: BACKGROUND: The use of Internet-based questionnaires for collection of data to evaluate patient education and other interventions has increased in recent years. Many self-report instruments have been validated using paper-and-pencil versions, but we cannot assume that the psychometric properties of an Internet-based version will be identical. OBJECTIVES: To look at similarities and differences between the Internet versions and the paper-and-pencil versions of 16 existing self-report instruments useful in evaluation of patient interventions. METHODS: Participants were recruited via the Internet and volunteered to participate (N=397), after which they were randomly assigned to fill out questionnaires online or via mailed paper-and-pencil versions. The self-report instruments measured were overall health, health distress, practice mental stress management, Health Assessment Questionnaire (HAQ) disability, illness intrusiveness, activity limitations, visual numeric for pain, visual numeric for shortness of breath, visual numeric for fatigue, self-efficacy for managing disease, aerobic exercise, stretching and strengthening exercise, visits to MD, hospitalizations, hospital days, and emergency room visits. Means, ranges, and confidence intervals are given for each instrument within each type of questionnaire. The results from the two questionnaires were compared using both parametric and non-parametric tests. Reliability tests were given for multi-item instruments. A separate sample (N=30) filled out identical questionnaires over the Internet within a few days and correlations were used to assess test-retest reliability. RESULTS: Out of 16 instruments, none showed significant differences when the appropriate tests were used. Construct reliability was similar within each type of questionnaire, and Internet test-retest reliability was high. 
Internet questionnaires required less follow-up to achieve a slightly (non-significant) higher completion rate compared to mailed questionnaires. CONCLUSIONS: Among a convenience sample recruited via the Internet, results from those randomly assigned to Internet participation were at least as good as, if not better than, among those assigned mailed questionnaires, with less recruitment effort required. The instruments administered via the Internet appear to be reliable, and to be answered similarly to the way they are answered when they are administered via traditional mailed paper questionnaires. [J Med Internet Res 2004;6(3):e29]

478 citations


"Improving the Quality of Web Survey..." refers background in this paper

  • ...In this issue of the Journal of Medical Internet Research we publish two methodological studies exploring the characteristics of Web-based surveys compared to mail-based surveys [1,2]....


Journal ArticleDOI
TL;DR: Overall, this survey suggests that patients are deriving considerable benefits from using the Internet and that some of the claimed risks seem to have been exaggerated.
Abstract: BACKGROUND: There have been many studies showing the variable quality of Internet health information and it has often been assumed that patients will blindly follow this and frequently come to harm. There have also been reports of problems for doctors and health services following patient Internet use, but their frequency has not been quantified. However, there have been no large, rigorous surveys of the perceptions of Internet-aware doctors about the actual benefits and harms to their patients of using the Internet. OBJECTIVE: To describe Internet-literate doctors' experiences of their patients' use of the Internet and resulting benefits and problems. METHODS: Online survey to a group of 800 Web-using doctors (members of a UK medical Internet service provider, Medix) in September and October 2001. RESULTS: Responses were received from 748 (94%) doctors, including 375 general practitioners (50%). Respondents estimated that 1%-2% of their patients used the Internet for health information in the past month with no regional variation. Over two thirds of the doctors considered Internet health information to be usually (20%) or sometimes (48%) reliable; this was higher in those recently qualified. Twice as many reported patients experiencing benefits (85%; 95% confidence interval, 80%-90%) than problems (44%; 95% confidence interval, 37%-50%) from the Internet. Patients gaining actual physical benefits from Internet use were reported by 40% of respondents, while 8% reported physical harm. Patients' overall experiences with the Internet were judged excellent 1%, good 29%, neutral 62%, poor 9%, or bad <1%. Turning to the impact of patient Internet use on the doctors themselves, 13% reported no problems, 38% 1 problem, and 49% 2 or more problems. Conversely, 20% reported no benefits for themselves, 49% 1 benefit, and 21% 2 or more benefits. 
CONCLUSIONS: These doctors reported patient benefits from Internet use much more often than harms, but there were more problems than benefits for the doctors themselves. Reported estimates of patient Internet usage rates were low. Overall, this survey suggests that patients are deriving considerable benefits from using the Internet and that some of the claimed risks seem to have been exaggerated. [J Med Internet Res 2002;4(1):e5]

168 citations


"Improving the Quality of Web Survey..." refers methods in this paper

  • ...In previous issues we have published Web-based research such as a survey among physicians conducted on a Web site [3]....


Journal ArticleDOI
TL;DR: The authors' Internet-based survey of surgeons resulted in a significantly lower response rate than a traditional mailed survey; researchers should not assume that the widespread availability and potential ease of Internet-based surveys will translate into higher response rates.
Abstract: BACKGROUND: Low response rates among surgeons can threaten the validity of surveys. Internet technologies may reduce the time, effort, and financial resources needed to conduct surveys. OBJECTIVE: We investigated whether using Web-based technology could increase the response rates to an international survey. METHODS: We solicited opinions from the 442 surgeon–members of the Orthopaedic Trauma Association regarding the treatment of femoral neck fractures. We developed a self-administered questionnaire after conducting a literature review, focus groups, and key informant interviews, for which we used sampling to redundancy techniques. We administered an Internet version of the questionnaire on a Web site, as well as a paper version, which looked similar to the Internet version and which had identical content. Only those in our sample could access the Web site. We alternately assigned the participants to receive the survey by mail (n=221) or an email invitation to participate on the Internet (n=221). Non-respondents in the mail arm received up to three additional copies of the survey, while non-respondents in the Internet arm received up to three additional requests, including a final mailed copy. All participants in the Internet arm had an opportunity to request an emailed Portable Document Format (PDF) version. RESULTS: The Internet arm demonstrated a lower response rate (99/221, 45%) than the mail questionnaire arm (129/221, 58%) (absolute difference 13%, 95% confidence interval 4%-22%, P<0.01). CONCLUSIONS: Our Internet-based survey to surgeons resulted in a significantly lower response rate than a traditional mailed survey. Researchers should not assume that the widespread availability and potential ease of Internet-based surveys will translate into higher response rates. [J Med Internet Res 2004;6(3):e30]
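The reported difference and confidence interval can be reproduced with a standard Wald interval for the difference of two independent proportions (an assumption about the method; the paper does not name the formula used):

```python
import math

def two_proportion_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference p1 - p2 with a 95% Wald confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Mail arm: 129/221 responded; Internet arm: 99/221 responded.
diff, lo, hi = two_proportion_diff_ci(129, 221, 99, 221)
# diff ≈ 0.136, CI ≈ (0.043, 0.228), close to the reported 13% (4%-22%)
```

The small discrepancy in the upper bound (23% vs the reported 22%) is consistent with rounding or a slightly different interval method in the original analysis.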

117 citations


"Improving the Quality of Web Survey..." refers background in this paper

  • ...In this issue of the Journal of Medical Internet Research we publish two methodological studies exploring the characteristics of Web-based surveys compared to mail-based surveys [1,2]....


Journal ArticleDOI
TL;DR: It is found that participation was at least as good as if not better among the Web survey group than among those receiving questionnaires by mail, and the responses to 16 health-related questions did not differ significantly between the two study groups.
Abstract: Leece et al used systematic sampling to assign one half of a list of orthopedic surgeons to a Web survey and the other half to a mail survey [1]. They observed that the Web survey produced a significantly lower response rate than the mail survey, and cautioned, “Researchers should not assume that the widespread availability and potential ease of Internet-based surveys will translate into higher response rates.” In contrast, Ritter et al, who recruited participants from the Internet and randomly assigned them either to a mail survey or to a Web survey, observed different results [2]. They found that participation was at least as good as if not better among the Web survey group than among those receiving questionnaires by mail. In addition the investigators found that the responses to 16 health-related questions did not differ significantly between the two study groups.

44 citations


"Improving the Quality of Web Survey..." refers background or methods in this paper

  • ...Statistical methods such as propensity scores may be used to adjust results [4]....


  • ...As explained in an accompanying editorial [4] as well as in a previous review [5], such surveys can be subject to considerable bias....

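The adjustment idea in the first quote can be illustrated, in simplified form, by post-stratification weighting: respondents are reweighted so the sample matches a reference population on a known covariate. This is a simple cousin of propensity-score adjustment, not the exact method the editorial refers to, and all shares and outcomes below are hypothetical.

```python
# Known population shares vs observed web-sample shares (both hypothetical).
reference = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
sample    = {"18-34": 0.50, "35-54": 0.35, "55+": 0.15}

# (stratum, binary outcome) pairs for each respondent -- hypothetical data.
responses = [("18-34", 1), ("18-34", 0), ("35-54", 1), ("55+", 1)]

# Weight each respondent by how under- or over-represented their stratum is.
weights = {s: reference[s] / sample[s] for s in reference}

# Weighted mean corrects for the web sample skewing young.
num = sum(weights[s] * y for s, y in responses)
den = sum(weights[s] for s, _ in responses)
adjusted_mean = num / den  # vs. the unweighted mean of 0.75
```

Because the "55+" stratum is under-represented (15% observed vs 30% in the population), its respondents get weight 2.0, pulling the adjusted estimate above the raw mean.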