
Sampling in Interview-Based Qualitative Research: A Theoretical and Practical Guide

02 Jan 2014-Qualitative Research in Psychology (Taylor & Francis)-Vol. 11, Iss: 1, pp 25-41
Abstract: Sampling is central to the practice of qualitative methods, but compared with data collection and analysis its processes have been discussed relatively little. A four-point approach to sampling in qualitative interview-based research is presented and critically discussed in this article, which integrates theory and process for the following: (1) defining a sample universe, by way of specifying inclusion and exclusion criteria for potential participation; (2) deciding upon a sample size, through the conjoint consideration of epistemological and practical concerns; (3) selecting a sampling strategy, such as random sampling, convenience sampling, stratified sampling, cell sampling, quota sampling or a single-case selection strategy; and (4) sample sourcing, which includes matters of advertising, incentivising, avoidance of bias, and ethical concerns pertaining to informed consent. The extent to which these four concerns are met and made explicit in a qualitative study has implications for its coherence, transparency, impact and trustworthiness.

Summary

Introduction

  • Sampling is central to the practice of qualitative methods, but compared with data collection and analysis, its processes are discussed relatively little.
  • The extent to which these four concerns are met and made explicit in a qualitative study has implications for its coherence, transparency, impact and trustworthiness.

Keywords

  • Sampling, purposive sampling, random sampling, theoretical sampling, case study, stratified sampling, quota sampling, sample size, recruitment.
  • Therefore, all researchers must consider the homogeneity/heterogeneity trade-off for themselves and delineate a sample universe that is coherent with their research aims and questions, and with the amount of research resources they have at their disposal.
  • Of all qualitative methodologies, Grounded Theory puts most emphasis on being flexible about sample size as a project progresses (Glaser, 1978).
  • Sample size may be increased if ongoing data analysis leads the researcher to realise that he/she has omitted an important group or type of person from the original sample universe, who should be added to the sample in order to enhance the validity or transferability of the findings or theory (Silverman, 2010).
  • For qualitative research, the danger of convenience sampling is that if the sample universe is broad, unwarranted generalisations may be attempted from a convenience sample.

Purposive sampling strategies

  • Purposive sampling strategies are non-random ways of ensuring that particular categories of cases within a sampling universe are represented in the final sample of a project.
  • The rationale for employing a purposive strategy is that the researcher assumes, based on their a-priori theoretical understanding of the topic being studied, that certain categories of individuals may have a unique, different or important perspective on the phenomenon in question and their presence in the sample should be ensured (Mason, 2002; Trost, 1986).
  • Summarised below are stratified, cell, quota and theoretical sampling, which are all purposive strategies used in studies that employ multiple cases.
  • Following this I describe significant case, intensity, deviant case, extreme case and typical case sampling, which are purposive strategies that are best employed when selecting a single case study.

Stratified sampling

  • In a stratified sample, the researcher first selects the particular categories or groups of cases that he/she considers should be purposively included in the final sample.
  • Here, the variable of ‘with children/without children’ is added to the divorce study sampling framework.
  • As previously mentioned, to include a purposive sampling stratification there must be clear theoretical grounds for the categories used.
  • Cell sampling, like stratified sampling, sets out categories of cases in advance, but specifies a range of cases to be filled in each cell when gaining the sample, rather than a fixed number.
  • This example is illustrated in Figure 3.
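A stratified sampling framework of this kind can be sketched as a table of category combinations, each with a target number of cases. The variables and targets below are hypothetical illustrations, not taken from the article's figures:

```python
from itertools import product

# Sketch of a stratified sampling frame for a divorce-study example:
# two variables (gender, with/without children) give four strata, each
# with a fixed target number of cases. Targets here are hypothetical.
genders = ["male", "female"]
children = ["with children", "without children"]

targets = {stratum: 5 for stratum in product(genders, children)}
recruited = {stratum: 0 for stratum in targets}

def still_needed(stratum):
    """Cases still required before this stratum's target is met."""
    return targets[stratum] - recruited[stratum]

# Recruiting one divorced father with children:
recruited[("male", "with children")] += 1
print(still_needed(("male", "with children")))  # 4
```

For cell sampling, the fixed targets could be replaced with (minimum, maximum) ranges per cell, reflecting its greater flexibility.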

Quota sampling

  • The process of quota sampling is a more flexible strategy than stratified or cell sampling.
  • Instead of requiring fixed numbers of cases in particular categories, quota sampling sets out a series of categories and a minimum number of cases required for each one (Mason, 2002).
  • As the sample is gathered, these quotas are monitored to establish whether they are being met.
  • A quota sample might list the following: at least 10 couples whose experience pertains to moving their father into supported accommodation, and at least 10 whose experience pertains to their mother.
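Because quotas are minimums rather than fixed cell counts, they can be monitored as recruitment proceeds. The following is a minimal sketch of such monitoring; the category names and minimums are hypothetical, not taken from the article:

```python
# Sketch: quota monitoring during recruitment. Quotas are minimum counts,
# and (unlike strata) one participant may satisfy several quota categories.
# The category names and minimums below are hypothetical.
quotas = {"father moved": 10, "mother moved": 10}
counts = {category: 0 for category in quotas}

def record(categories):
    """Register a recruited participant against the quotas they satisfy."""
    for category in categories:
        counts[category] += 1

def unmet_quotas():
    """Quota categories whose minimum has not yet been reached."""
    return [q for q, minimum in quotas.items() if counts[q] < minimum]

for _ in range(10):
    record(["father moved"])
record(["mother moved"])
print(unmet_quotas())  # ['mother moved']
```

Tracking unmet quotas in this way makes it easy to see, at any point during recruitment, which categories still need targeted advertising.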

Theoretical sampling

  • Theoretical sampling differs from the aforementioned purposive strategies, for it takes place during the collection and analysis of data, following provisional sampling and analysis of some data (Coyne, 1997; Strauss, 1987).
  • An intensity sampling strategy was taken for the purposes of selecting a case study on early adult crisis; from a previous interview sample, individuals were selected who had provided a comprehensive account of both the inner and outer dimensions of their crisis experience, following which one person was selected randomly from this information-rich group (Robinson & Smith, 2010b).
  • Ways of recruiting participants for interviews are only limited by a researcher’s ingenuity in how to disseminate the message of his/her research study to the sample universe.
  • Recruiting interviewees within organisations presents particular advertising challenges.
  • Having gained a list of interested individuals, the researcher obtained consent from parents/guardians prior to arranging interviews.

The question of incentives and ‘respondent-driven sampling’

  • When recruiting people into an interview study, the question of whether to offer them a financial incentive for participation is a key decision that the researcher must make.
  • The benefits of incentives are that they increase the likelihood of participation by adding additional motivation, and also increase retention in longitudinal studies (Yancey, Ortega & Kumanyika, 2006).
  • These individuals are then given a number of ‘recruitment coupons’.
  • Key decisions to enhance rigour are the relationship of the sample to the sample universe, the appropriate choice of sampling strategy, the robustness of the sample sourcing approach, and the overall fit between research questions and total sample strategy.
  • This criterion requires the research to have theoretical or practical relevance beyond the sample used.

Summary

  • In summary, four points holistically encompass the challenge of sampling in interview-based qualitative studies: defining the sample universe, deciding on a sample size, selecting a sample strategy and sourcing cases.


Qualitative Research in Psychology, in press
Sampling in interview-based qualitative research: A theoretical and practical guide
Abstract
Sampling is central to the practice of qualitative methods, but compared with data collection and
analysis, its processes are discussed relatively little. A four-point approach to sampling in qualitative
interview-based research is presented and critically discussed in this article, which integrates theory
and process for the following: (1) Defining a sample universe, by way of specifying inclusion and
exclusion criteria for potential participation; (2) Deciding upon a sample size, through the conjoint
consideration of epistemological and practical concerns; (3) Selecting a sampling strategy, such as
random sampling, convenience sampling, stratified sampling, cell sampling, quota sampling or a
single-case selection strategy; and (4) Sample sourcing, which includes matters of advertising,
incentivising, avoidance of bias, and ethical concerns pertaining to informed consent. The extent to
which these four concerns are met and made explicit in a qualitative study has implications for its
coherence, transparency, impact and trustworthiness.
Keywords
Sampling, purposive sampling, random sampling, theoretical sampling, case study, stratified
sampling, quota sampling, sample size, recruitment

Sampling is an important component of qualitative research design that has been given less
attention in methodological textbooks and journals than its centrality to the process warrants
(Mason, 2002). In order to help fill this void, the current article aims to provide academics, students
and practitioners in Psychology with a theoretically-informed and practical guide to sampling for use
in research that employs interviewing as data collection. Recognised methods in qualitative
psychology that commonly use interviews as a data source include Interpretative Phenomenological
Analysis (IPA), Grounded Theory, Thematic Analysis, Content Analysis, and some forms of Narrative
Analysis. This article presents theoretical and practical concerns within the framework of four ‘pan-
paradigmatic’ points: (1) setting a sample universe, (2) selecting a sample size, (3) devising a sample
strategy and (4) sample sourcing. Table 1 summarises the principal features of these. All of the
aforementioned methods can be used in conjunction with this four-point approach to sampling.
(Insert Table 1)
Point 1: Defining a sample universe
The first key concern in the four-point approach is defining the sample universe (also called ‘study
population’ or ‘target population’). This is the totality of persons from which cases may legitimately
be sampled in an interview study. To delineate a sample universe, a set of inclusion criteria or
exclusion criteria, or a combination of both, must be specified for the study (Luborsky & Rubinstein,
1995; Patton, 1990). Inclusion criteria should specify an attribute that cases must possess to qualify
for the study (e.g. a study on domestic violence that specifies that participants must be women who
have suffered partner violence that was reported to the police or social services), while exclusion
criteria must stipulate attributes that disqualify a case from the study (e.g. a study on exercise that
stipulates that participants must not be smokers). Together, these criteria draw a boundary around
the sample universe, as illustrated in Figure 1.
(Insert Figure 1)
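As a minimal sketch (not from the article), the way inclusion and exclusion criteria jointly draw a boundary around the sample universe can be expressed as a filter over candidate cases. The criteria and candidate data below are hypothetical, loosely echoing the exercise-study example:

```python
# Sketch: applying inclusion/exclusion criteria to candidate participants.
# The criteria and candidate data are hypothetical, for illustration only.

def in_sample_universe(person, inclusion, exclusion):
    """A person qualifies if every inclusion criterion holds
    and no exclusion criterion holds."""
    return (all(rule(person) for rule in inclusion)
            and not any(rule(person) for rule in exclusion))

candidates = [
    {"name": "A", "age": 34, "exercises": True,  "smoker": False},
    {"name": "B", "age": 52, "exercises": True,  "smoker": True},
    {"name": "C", "age": 19, "exercises": False, "smoker": False},
]

inclusion = [lambda p: p["exercises"]]   # must exercise regularly
exclusion = [lambda p: p["smoker"]]      # smokers are disqualified

universe = [p for p in candidates
            if in_sample_universe(p, inclusion, exclusion)]
print([p["name"] for p in universe])  # ['A']
```

Adding further rules to either list tightens the boundary and, as the text notes, makes the resulting sample universe more homogenous.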
Homogeneity and heterogeneity in the sample universe
The more inclusion and exclusion criteria that are used to define a sample universe, and the more
specific these criteria are, the more homogenous the sample universe becomes. Sample universe
homogeneity can be sought along a variety of parameters, such as demographic homogeneity,
geographical homogeneity, physical homogeneity, psychological homogeneity or life history
homogeneity (see Table 2 for descriptions of these). The addition of exclusion or inclusion criteria in
these different domains increases sample homogeneity.

One of these forms of homogeneity - psychological homogeneity - is established if a criterion for
case inclusion is a particular mental ability, attitude or trait. In order to make case selection possible
based on this kind of criterion, quantitative data from questionnaires or tests can be used as a
sampling tool (Coleman, Williams & Martin, 1996). For example, Querstret and Robinson (2013)
gained quantitative data on the extent to which individuals self-report having a personality that
varies across different social contexts, and used this data to select individuals who were one
standard deviation or more above the mean for ‘cross-context variability’. These persons were then
interviewed for a qualitative study about the motivations for, and experiences of, varying behaviour
and personality according to social context.
(Insert Table 2)
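The kind of questionnaire-based screening used in the cross-context variability example can be sketched as follows. Only the one-standard-deviation-above-the-mean cutoff rule comes from the text; the scores themselves are invented:

```python
from statistics import mean, stdev

# Sketch: questionnaire scores used as a sampling tool. Individuals at
# least one standard deviation above the mean are selected for interview,
# as in the cross-context variability example. Scores are invented.
scores = {"p1": 42, "p2": 55, "p3": 61, "p4": 38, "p5": 70, "p6": 49}

m = mean(scores.values())
sd = stdev(scores.values())          # sample standard deviation
cutoff = m + sd                      # one SD above the mean

selected = [pid for pid, s in scores.items() if s >= cutoff]
print(selected)  # ['p5']
```

The same pattern works for any psychological inclusion criterion that can be operationalised as a test score and a threshold.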
The extent of sample universe homogeneity that a research study aims at is influenced by both
theoretical and practical factors. Theoretically, certain qualitative methods have a preference for
homogenous samples; for example Interpretative Phenomenological Analysis is explicit that
homogenous samples work best in conjunction with its philosophical foundations and analytical
processes (Smith, Flowers & Larkin, 2009). By maintaining a measure of sample homogeneity, IPA
studies remain contextualised within a defined setting, and any generalisation from the study is
made cautiously to that localised sample universe, and not beyond at any more speculative or
abstract level.
Conversely, there are approaches that aim to gain samples that are intentionally heterogeneous, for
example the variation sampling technique of Grounded Theory (Strauss and Corbin, 1998), or the
cross-contextual approach described by Mason (2002). The rationale for gaining a heterogeneous
sample is that any commonality found across a diverse group of cases is more likely to be a widely
generalizable phenomenon than a commonality found in a homogenous group of cases. Therefore,
heterogeneity of sample helps provide evidence that findings are not solely the preserve of a particular
group, time or place, which can help establish whether a theory developed within one particular
context applies to other contexts (Mason, 2002).
Cross-cultural qualitative research is another instance where a demographically and geographically
heterogeneous sample may be called for. Such research selects individuals from different cultures in
order to compare them and search for similarities and differences (see Table 2). An example of
qualitative research conducted at such a scale was the EUROCARE study, in which the sample universe
comprised persons caring for co-resident spouses with Alzheimer’s in 14 European countries
(Murray, Schneider, Banerjee, & Mann, 1999; Schneider, Murray, Banerjee, & Mann, 1999). This

influential piece of research shows that cross-cultural qualitative research can be successfully
conducted with a culturally heterogeneous sample universe, if resources are available.
There are however challenges inherent in using a heterogeneous sample. The first is that findings
will be relatively removed from real-life settings, and the second is that the sheer diversity of data
may lessen the likelihood of meaningful core cross-case themes being found during analysis. Therefore,
all researchers must consider the homogeneity/heterogeneity trade-off for themselves and
delineate a sample universe that is coherent with their research aims and questions, and with the
amount of research resources they have at their disposal.
The sample universe is not only a practical boundary that aids the process of sampling, it also
provides an important theoretical role in the analysis and interpretation process, by specifying what
a sample is a sample of, and thus defining who or what a study is about. The level of generality to
which a study’s findings are relevant and logically inferable is the sample universe (Mason, 2002), thus
the more clearly and explicitly a sample universe is described, the more valid and transparent any
generalisation can be. If a study does not define a sample universe, or makes claims beyond its own
sample universe, this undermines its credibility and coherence.
Point 2: Deciding on a sample size
The size of a sample used for a qualitative project is influenced by both theoretical and practical
considerations. The practical reality of research is that most studies require a provisional decision on
sample size at the initial design stage. Without a provisional number at the design stage, the
duration and required resource-allocation of the project cannot be ascertained, which makes planning
all but impossible. However, a priori sample specification need not imply inflexibility: instead of a
fixed number, an approximate sample size range can be given, with a minimum and a maximum.
Interview studies that have a nomothetic aim to develop or test general theory are to a degree
reliant on sample size to generalise (Robinson, 2012). Sample size is by no means the only factor
influencing generalisability, but it is part of the picture. O’Connor and Wolfe’s grounded theory of
midlife transition, which was based on interviews with a sample of 64 adults between the ages of 35
and 50 (O’Connor and Wolfe, 1987), illustrates this point. A way of working with larger sample sizes
in qualitative research, which prevents analytical overload, is to combine separate studies together
into larger syntheses. For example, I recently combined findings from a series of three studies on the
topic of early adult crisis into a single analytical synthesis and single article. One contributing study
had a sample of 16 cases, the second had a sample of 8 cases, and the third employed a sample of

26 cases. These were analysed and reported as separate studies originally, before being combined
into the synthesis paper with a total sample of N=50 (Robinson, Wright & Smith, 2013).
Very large-scale qualitative interview projects include hundreds of individuals in their sample. For
example the aforementioned EUROCARE project employed a sample size of approximately 280 (20
persons for each of 14 countries) (Murray, Schneider, Banerjee, & Mann, 1999), and the MIDUS
study (The Midlife in the United States Study) has involved over 700 structured
interviews (Wethington, 2000). While such projects do require time, money, many researchers and a
robust purposive sampling strategy (see below), they are achieved by breaking up the research into
smaller sub-studies that are initially analysed on their own terms before being aggregated together.
Interview research that has an idiographic aim typically seeks a sample size that is sufficiently small
for individual cases to have a locatable voice within the study, and for an intensive analysis of each
case to be conducted. For these reasons, researchers using Interpretative Phenomenological
Analysis are given a guideline of 3 to 16 participants for a single study, with the lower end of that
spectrum suggested for undergraduate projects and the upper end for larger-scale funded projects
(Smith, Flowers & Larkin, 2009). This sample size range provides scope for developing cross-case
generalities, while preventing the researcher being bogged down in data, and permitting individuals
within the sample to be given a defined identity, rather than being subsumed into an anonymous
part of a larger whole (Robinson & Smith, 2010).
Case study design is often referred to as a distinct kind of method that is separable from standard
qualitative method (e.g. Yin, 2009). In relation to interview-based case-studies, a more integrative
view is taken here, in which the decision to do an N=1 case study is a sample size decision to be taken
as part of the four-point rubric set out in this guide. The resulting case study can then be analysed
using an idiographic interview-focused method such as IPA. There are a number of different reasons
for choosing a sample size of 1, and Table 3 lists six of these; psychobiography, theoretical or
hermeneutic insight, theory-testing or construct-problematising, demonstration of possibility,
illustration of best practice and theory-exemplification. All of these warrant a sample size of 1 and
also require associated sample strategies, which are discussed later in this article.
These case study objectives are not mutually exclusive. An example of a paper that evidences
multiple aims is Sparkes’ narrative analysis of the autobiography of cyclist Lance Armstrong (Sparkes,
2004). It includes aspects of psychobiography, hermeneutic insight and construct-problematising.
(Insert Table 3)


References
Miles, M.B. & Huberman, A.M. (1994). Qualitative Data Analysis: An Expanded Sourcebook (2nd edn). Thousand Oaks, CA: Sage.
Smith, J.A., Flowers, P. & Larkin, M. (2009). Interpretative Phenomenological Analysis: Theory, Method and Research. London: Sage.
Strauss, A. & Corbin, J. (1998). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory (2nd edn). Thousand Oaks, CA: Sage.
Trost, J.E. (1986). Statistically nonrepresentative stratified sampling: A sampling technique for qualitative studies. Qualitative Sociology, 9.
Yin, R.K. (1984). Case Study Research: Design and Methods. Beverly Hills, CA: Sage.

    [...]

  • ...Recognised methods in qualitative psychology that commonly use interviews as a data source include Interpretative Phenomenological Analysis (IPA), Grounded Theory, Thematic Analysis, Content Analysis and some forms of Narrative Analysis....

    [...]

Journal ArticleDOI
TL;DR: Patton's text surveys the nature of qualitative inquiry and its theoretical orientations, particularly appropriate qualitative applications, qualitative designs and data collection (including qualitative interviewing), and qualitative analysis and interpretation, with attention to enhancing the quality and credibility of qualitative analysis.
Abstract: PART ONE: CONCEPTUAL ISSUES IN THE USE OF QUALITATIVE METHODS The Nature of Qualitative Inquiry Strategic Themes in Qualitative Methods Variety in Qualitative Inquiry Theoretical Orientations Particularly Appropriate Qualitative Applications PART TWO: QUALITATIVE DESIGNS AND DATA COLLECTION Designing Qualitative Studies Fieldwork Strategies and Observation Methods Qualitative Interviewing PART THREE: ANALYSIS, INTERPRETATION, AND REPORTING Qualitative Analysis and Interpretation Enhancing the Quality and Credibility of Qualitative Analysis

31,305 citations


"Sampling in Interview-Based Qualita..." refers background in this paper

  • ...To delineate a sample universe, a set of inclusion criteria or exclusion criteria, or a combination of both, must be specified for the study (Luborsky & Rubinstein 1995; Patton 1990)....

    [...]

  • ...Alternatively, if the aim is to demonstrate the possibility of a phenomenon, an extreme case strategy may be used, which locates someone who shows an extreme or unusual behaviour, ability or characteristic (Patton 1990)....

    [...]

Book
01 Jan 2008
TL;DR: In this book, the authors present strategies for qualitative data analysis, including attention to context, process and theoretical integration, along with criteria for evaluation and answers to common student questions.
Abstract: Introduction -- Practical considerations -- Prelude to analysis -- Strategies for qualitative data analysis -- Introduction to context, process and theoretical integration -- Memos and diagrams -- Theoretical sampling -- Analyzing data for concepts -- Elaborating the analysis -- Analyzing data for context -- Bringing process into the analysis -- Integrating around a concept -- Writing theses, monographs, and giving talks -- Criteria for evaluation -- Student questions and answers.

31,251 citations

Frequently Asked Questions (12)
Q1. What contributions have the authors mentioned in the paper "Qualitative research in psychology, in press"?

Sampling is central to the practice of qualitative methods, but compared with data collection and analysis, its processes are discussed relatively little. A four-point approach to sampling in qualitative interview-based research is presented and critically discussed in this article, which integrates theory and process for the following: (1) defining a sample universe, by way of specifying inclusion and exclusion criteria for potential participation; (2) deciding upon a sample size, through the conjoint consideration of epistemological and practical concerns; (3) selecting a sampling strategy, such as random sampling, convenience sampling, stratified sampling, cell sampling, quota sampling or a single-case selection strategy; and (4) sample sourcing, which includes matters of advertising, incentivising, avoidance of bias, and ethical concerns pertaining to informed consent. The extent to which these four concerns are met and made explicit in a qualitative study has implications for its coherence, transparency, impact and trustworthiness.
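As a rough illustration of the stratified strategy mentioned in point (3), the mechanics of drawing an equal number of participants from each stratum of a candidate pool can be sketched in Python. This is not from the article; the pool, the `gender` attribute and the quota of two per stratum are hypothetical choices made only for the example:

```python
import random

def stratified_sample(candidates, stratum_key, per_stratum, seed=0):
    """Draw an equal number of participants from each stratum.

    `candidates` is a list of dicts describing potential interviewees;
    `stratum_key` names the attribute used to form strata (e.g. 'gender');
    `per_stratum` is how many participants to draw from each stratum.
    """
    rng = random.Random(seed)
    # Group candidates by the chosen attribute.
    strata = {}
    for person in candidates:
        strata.setdefault(person[stratum_key], []).append(person)
    # Draw the same number from every stratum, failing loudly if one is too small.
    sample = []
    for value, members in sorted(strata.items()):
        if len(members) < per_stratum:
            raise ValueError(f"stratum {value!r} has only {len(members)} candidates")
        sample.extend(rng.sample(members, per_stratum))
    return sample

# Hypothetical candidate pool with an imbalance (three females, two males).
pool = [
    {"name": "P1", "gender": "female"},
    {"name": "P2", "gender": "female"},
    {"name": "P3", "gender": "male"},
    {"name": "P4", "gender": "male"},
    {"name": "P5", "gender": "female"},
]
chosen = stratified_sample(pool, "gender", per_stratum=2)
# Two females and two males are drawn, regardless of the pool's imbalance.
```

The point of the sketch is that stratification makes the composition of the final sample a design decision rather than an accident of who happens to volunteer.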

Respondent-driven sampling is a form of snowball sampling that gives financial incentives for recruiting others into the study (Heckathorn, 1997; Johnston & Sabin, 2010; Wang et al., 2005). 
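The wave-by-wave logic of snowball recruitment can be sketched as a small simulation. Everything below is an illustrative assumption, not from the article: the peer network, the coupon limit (respondent-driven sampling typically caps each participant's incentivised referrals with a fixed number of coupons) and the function names are hypothetical:

```python
def snowball_recruit(seeds, nominate, target_n, max_coupons=3):
    """Simulate wave-by-wave snowball recruitment.

    `seeds` are the initial participants; `nominate(person)` returns the
    peers that person can refer. Recruitment proceeds in waves until
    `target_n` participants are enrolled or referrals dry up.
    """
    recruited, wave = list(seeds), list(seeds)
    seen = set(seeds)
    while wave and len(recruited) < target_n:
        next_wave = []
        for person in wave:
            peers = [p for p in nominate(person) if p not in seen]
            # The coupon limit caps how many referrals each person contributes.
            for peer in peers[:max_coupons]:
                seen.add(peer)
                next_wave.append(peer)
                recruited.append(peer)
                if len(recruited) >= target_n:
                    return recruited
        wave = next_wave
    return recruited

# Hypothetical peer network: each participant knows the next two IDs.
network = {i: [i + 1, i + 2] for i in range(20)}
sample = snowball_recruit([0], lambda p: network.get(p, []), target_n=8)
```

A simulation like this also makes the method's known weakness visible: the final sample is shaped by the seeds' social networks, which is why respondent-driven variants add coupon limits and weighting to mitigate that bias.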

The best way of justifying the use of convenience samples in qualitative research is by defining the sample universe as demographically and geographically local and thus restricting generalisation to that local level, rather than attempting decontextualised abstract claims.

A way of working with larger sample sizes in qualitative research, which prevents analytical overload, is to combine separate studies together into larger syntheses. 

When recruiting people into an interview study, the question of whether to offer them a financial incentive for participation is a key decision that the researcher must make. 

Of all qualitative methodologies, Grounded Theory puts most emphasis on being flexible about sample size as a project progresses (Glaser, 1978). 

The problem of using this approach in quantitative research is that statistics function on the basis that samples are random, when they are typically not. 

The third criterion, ‘transparency’, is enhanced by being explicit in a final research report as to how all four points – sample universe, sample size, sample strategy and sample sourcing – were met. 

This female bias can be easily counteracted in a mixed-gender purposive sampling frame that ensures male and female representation, but more subtle systematic biases in differences between participants and non-participants are harder to deal with.

Key decisions to enhance rigour are the relationship of the sample to the sample universe, the appropriate choice of sampling strategy, the robustness of the sample sourcing approach, and the overall fit between research questions and total sample strategy. 

The importance of sampling to qualitative research validity

Addressing the four sampling issues that have been outlined in this article is central to enhancing the validity of any particular interview study.

The process involves either (a) locating cases from new groups of participants (for example, a ‘comparison’ group) to provide heterogeneity in the sample, or (b) re-structuring an already-gathered sample into a new set of categories that have emerged from analysis, replacing any stratification/cells/quotas that were chosen a priori (Draucker, Martsolf, Ross & Rusk, 2007).

Trending Questions (1)
What is convenience sampling in a qualitative research?

Convenience sampling in qualitative research involves selecting participants based on their easy accessibility or proximity to the researcher, often leading to a non-random sample representation.