Journal ISSN: 2168-0094

Survey Practice
About: Survey Practice is an academic journal published by Survey Practice. The journal publishes mainly in the area(s): Population & Respondent. It has an ISSN identifier of 2168-0094 and is open access. Over its lifetime, it has published 355 papers, which have received 2,781 citations.


Papers
Journal Article (DOI)
Mario Callegaro
TL;DR: It is important to monitor from which device your online sample is taking the survey, and to consider the consequences the device might have for visual design impact and survey estimates.
Abstract: The type of devices that can be used to go online is becoming more varied. Users access the internet through traditional desktops and laptops, as well as netbooks, tablets, videogame consoles, mobile phones and ebook readers. Because many online surveys are designed to be taken on a standard desktop or laptop screen, it is important to monitor from which device your online sample is taking the survey, and to consider the consequences the device might have for visual design impact and survey estimates. A survey designed to be taken on a desktop does not necessarily or automatically look the same when taken from netbooks, smartphones and other devices.
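
In practice, one way to monitor the respondent's device is to classify the browser's User-Agent header at the time the survey is taken. The sketch below is a simplified illustration, not from the article; the regex rules and category names are assumptions, and production survey platforms use far richer detection.

```python
import re

def classify_device(user_agent: str) -> str:
    """Roughly classify a survey respondent's device from the User-Agent string."""
    ua = user_agent.lower()
    if re.search(r"ipad|tablet|kindle|silk", ua):
        return "tablet"
    if re.search(r"iphone|windows phone|mobile", ua):
        return "smartphone"
    if re.search(r"playstation|xbox|nintendo", ua):
        return "game console"
    return "desktop/laptop"

# Example: tag each completed interview so estimates and visual-design effects
# can later be compared by device class.
ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) "
      "AppleWebKit/605.1.15 Mobile/15E148 Safari/604.1")
print(classify_device(ua))  # -> smartphone
```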

87 citations

Journal Article (DOI)
TL;DR: Smart Surveys for Smart Phones explores various approaches for conducting online mobile surveys via smartphones and finds that using a mobile app to conduct a survey is a viable option.
Abstract: Smart Surveys for Smart Phones: Exploring Various Approaches for Conducting Online Mobile Surveys via Smartphones

67 citations

Journal Article (DOI)
Kay W. Axhausen, Claude Weis
TL;DR: In this paper, the response burden of self-administered surveys was evaluated using an ex-ante prediction of response burden, and the resulting response rates were found to be high.
Abstract: While the literature on survey methods (see Richardson et al., 1995 or Dillman, 2000 for relevant textbooks) discusses response burden, there seems to be no literature on its ex-ante prediction, nor on the resulting response rates. Still, market research firms have to estimate their interviewers' time requirements in advance to be able to calculate a budget for a study. Using its point system, Ursula Raymann of the Zurich-based Gesellschaft für Sozialforschung rated a series of self-administered surveys (Table 1), which had been conducted by Axhausen and his collaborators at the Institute for Transport Planning and Systems (IVT).

Table 1: Response burden (points by question type and action)
Question or transition (up to 3 lines): 2
Each additional line: 1
Closed yes/no answers: 1
Simple numerical answer (e.g. year of birth): 1
Rating with up to 5 possibilities: 2
Rating with more than 5 possibilities: 3
Left, middle, right rating: 2
Scales with 3 and more grades: 2
Best of ranking with cards: 4
Second and each additional best ranking: 3
Answer to subquestions of up to 5 words: 1
Answers to subquestions of up to 2 lines: 2
a) Response to half-open question with ≤8 possibilities: 2
   Each additional one: 2
b) Response to half-open question with ≥8 possibilities: 4
   Each additional one: 3
Answer to "please specify": 2
First answer to an open question: 6
Each additional answer to the open question: 3
Mixing showcards: 6
Giving/showing a card to the respondent: 1
Per response category on a showcard: 1
Filter: 0.5
Branching: 0.5
© Gesellschaft für Sozialforschung, Zurich, 2006
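
The point system is essentially a weighted sum over the elements of a questionnaire. Below is a minimal sketch of how such a tally could be automated; the dictionary keys, the questionnaire representation, and the function name are illustrative assumptions rather than part of the paper, and only a subset of Table 1 is transcribed.

```python
# Sketch: estimating response burden with a Table 1-style point system.
# Item names and the questionnaire format are illustrative assumptions.

# Points per item type (abridged transcription of Table 1).
BURDEN_POINTS = {
    "question_or_transition": 2,   # question or transition of up to 3 lines
    "additional_line": 1,
    "closed_yes_no": 1,
    "simple_numerical": 1,         # e.g. year of birth
    "rating_up_to_5": 2,
    "rating_more_than_5": 3,
    "open_first_answer": 6,
    "open_additional_answer": 3,
    "filter": 0.5,
    "branching": 0.5,
}

def estimate_burden(items):
    """Sum burden points over (item_type, count) pairs describing a questionnaire."""
    return sum(BURDEN_POINTS[item_type] * count for item_type, count in items)

# Example: a short self-administered questionnaire.
questionnaire = [
    ("question_or_transition", 15),
    ("closed_yes_no", 6),
    ("rating_up_to_5", 8),
    ("open_first_answer", 1),
    ("filter", 2),
]
print(estimate_burden(questionnaire))  # total points, comparable across designs
```

A total of this kind can be compared across alternative questionnaire designs before fielding, which is the ex-ante use the abstract has in mind.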

61 citations

Journal Article (DOI)
TL;DR: The United States is a multi-racial and multi-cultural society, so social scientists conducting surveys are dealing with people from different cultural backgrounds and have to make sure that the survey measures are comparable across subpopulations with different cultures.
Abstract: The United States is a multi-racial and multi-cultural society. Social scientists conducting surveys face one problem: they are dealing with people from different cultural backgrounds. Similar to the challenge of international studies, we have to make sure that the survey measures are comparable across subpopulations with different cultures. To ensure comparability, we should consider two important factors: (1) equivalence of presenting the measures (whether the presentation of the stimuli is equivalent and comparable across different cultures); and (2) equivalence of interpreting/responding to the measures (whether respondents would interpret and respond to the stimuli in the same way).
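
As a loose illustration of factor (2), the sketch below compares how two subgroups distribute their answers on a single item, using a chi-square test. This is only a crude screening step, since a significant difference can reflect genuine attitude differences rather than non-equivalence, and the counts, group labels, and choice of test are assumptions for illustration, not part of the article.

```python
from scipy.stats import chi2_contingency

# Response counts (agree / neutral / disagree) for one item, by subgroup.
# The numbers are invented for illustration.
group_a = [120, 40, 40]
group_b = [90, 70, 40]

chi2, p, dof, expected = chi2_contingency([group_a, group_b])
print(f"chi2={chi2:.2f}, p={p:.3f}")
# A small p flags an item worth probing further (e.g. with cognitive interviews);
# it is not proof that the measure is non-equivalent across the groups.
```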

55 citations

Performance Metrics
No. of papers from the Journal in previous years
Year: Papers
2023: 6
2022: 12
2021: 11
2020: 14
2019: 13
2018: 21