scispace - formally typeset
Author

Rachel Ormston

Bio: Rachel Ormston is an academic researcher. The author has an h-index of 1 and has co-authored 1 publication receiving 8,957 citations.

Papers
Book
20 Dec 2013
TL;DR: This book sets out the foundations of qualitative research and the application of qualitative methods to social science research, covering study design, ethics, sampling, fieldwork, in-depth interviews, focus groups, observation, data analysis, and writing up.
Abstract (chapter listing):
The Foundations of Qualitative Research - Rachel Ormston, Liz Spencer, Matt Barnard, Dawn Snape
The Applications of Qualitative Methods to Social Research - Jane Ritchie and Rachel Ormston
Design Issues - Jane Lewis and Carol McNaughton Nicholls
Ethics of Qualitative Research - Stephen Webster, Jane Lewis and Ashley Brown
Designing and Selecting Samples - Jane Ritchie, Jane Lewis, Gilliam Elam, Rosalind Tennant and Nilufer Rahim
Designing Fieldwork - Sue Arthur, Martin Mitchell, Jane Lewis and Carol McNaughton Nicholls
In-depth Interviews - Alice Yeo, Robin Legard, Jill Keegan, Kit Ward, Carol McNaughton Nicholls and Jane Lewis
Focus Groups - Helen Finch, Jane Lewis and Caroline Turley
Observation - Carol McNaughton Nicholls, Lisa Mills and Mehul Kotecha
Analysis: Principles and Processes - Liz Spencer, Jane Ritchie, Rachel Ormston, William O'Connor and Matt Barnard
Traditions and Approaches
Analysis in Practice - Liz Spencer, Jane Ritchie, William O'Connor, Gareth Morrell and Rachel Ormston
Generalisability
Writing up Qualitative Research - Clarissa White, Kandy Woodfield, Jane Ritchie and Rachel Ormston

9,682 citations


Cited by
Journal ArticleDOI
TL;DR: Used effectively, with the leadership of an experienced qualitative researcher, the Framework Method is a systematic and flexible approach to analysing qualitative data and is appropriate for use in research teams even where not all members have previous experience of conducting qualitative research.
Abstract: The Framework Method is becoming an increasingly popular approach to the management and analysis of qualitative data in health research. However, there is confusion about its potential application and limitations. The article discusses when it is appropriate to adopt the Framework Method and explains the procedure for using it in multi-disciplinary health research teams, or those that involve clinicians, patients and lay people. The stages of the method are illustrated using examples from a published study. Used effectively, with the leadership of an experienced qualitative researcher, the Framework Method is a systematic and flexible approach to analysing qualitative data and is appropriate for use in research teams even where not all members have previous experience of conducting qualitative research.

5,939 citations

Journal ArticleDOI
TL;DR: It is concluded that saturation should be operationalized in a way that is consistent with the research question(s) and the theoretical position and analytic framework adopted, but also that there should be some limit to its scope, so as not to risk saturation losing its coherence and potency if its conceptualization and uses are stretched too widely.
Abstract: Saturation has attained widespread acceptance as a methodological principle in qualitative research. It is commonly taken to indicate that, on the basis of the data that have been collected or analysed hitherto, further data collection and/or analysis are unnecessary. However, there appears to be uncertainty as to how saturation should be conceptualized, and inconsistencies in its use. In this paper, we look to clarify the nature, purposes and uses of saturation, and in doing so add to theoretical debate on the role of saturation across different methodologies. We identify four distinct approaches to saturation, which differ in terms of the extent to which an inductive or a deductive logic is adopted, and the relative emphasis on data collection, data analysis, and theorizing. We explore the purposes saturation might serve in relation to these different approaches, and the implications for how and when saturation will be sought. In examining these issues, we highlight the uncertain logic underlying saturation—as essentially a predictive statement about the unobserved based on the observed, a judgement that, we argue, results in equivocation, and may in part explain the confusion surrounding its use. We conclude that saturation should be operationalized in a way that is consistent with the research question(s), and the theoretical position and analytic framework adopted, but also that there should be some limit to its scope, so as not to risk saturation losing its coherence and potency if its conceptualization and uses are stretched too widely.

4,750 citations

Journal ArticleDOI
TL;DR: Developing a universal quality standard for thematic analysis (TA) is complicated by the existence of numerous iterations of TA that differ paradigmatically, philosophically and procedurally.
Abstract: Developing a universal quality standard for thematic analysis (TA) is complicated by the existence of numerous iterations of TA that differ paradigmatically, philosophically and procedurally. This ...

1,787 citations

Journal ArticleDOI
TL;DR: This guide offers practical guidance for those who wish to apply the Theoretical Domains Framework to assess implementation problems and support intervention design, and provides a brief rationale for using a theoretical approach to investigate and address implementation problems.
Abstract: Implementing new practices requires changes in the behaviour of relevant actors, and this is facilitated by understanding of the determinants of current and desired behaviours. The Theoretical Domains Framework (TDF) was developed by a collaboration of behavioural scientists and implementation researchers who identified theories relevant to implementation and grouped constructs from these theories into domains. The collaboration aimed to provide a comprehensive, theory-informed approach to identify determinants of behaviour. The first version was published in 2005, and a subsequent version following a validation exercise was published in 2012. This guide offers practical guidance for those who wish to apply the TDF to assess implementation problems and support intervention design. It presents a brief rationale for using a theoretical approach to investigate and address implementation problems, summarises the TDF and its development, and describes how to apply the TDF to achieve implementation objectives. Examples from the implementation research literature are presented to illustrate relevant methods and practical considerations. Researchers from Canada, the UK and Australia attended a 3-day meeting in December 2012 to build an international collaboration among researchers and decision-makers interested in advancing the use of the TDF. The participants were experienced in using the TDF to assess implementation problems, design interventions, and/or understand change processes. This guide is an output of the meeting and also draws on the authors' collective experience. Examples from the implementation research literature judged by the authors to be representative of specific applications of the TDF are included in this guide.
We explain and illustrate methods, with a focus on qualitative approaches, for selecting and specifying target behaviours key to implementation, selecting the study design, deciding the sampling strategy, developing study materials, collecting and analysing data, and reporting findings of TDF-based studies. Areas for development include methods for triangulating data (e.g. from interviews, questionnaires and observation) and methods for designing interventions based on TDF-based problem analysis. We offer this guide to the implementation community to assist in the application of the TDF to achieve implementation objectives. Benefits of using the TDF include the provision of a theoretical basis for implementation studies, good coverage of potential reasons for slow diffusion of evidence into practice, and a method for progressing from theory-based investigation to intervention.

1,522 citations

Journal ArticleDOI
TL;DR: It is recommended that qualitative health researchers be more transparent about evaluations of their sample size sufficiency, situating these within broader and more encompassing assessments of data adequacy.
Abstract: Choosing a suitable sample size in qualitative research is an area of conceptual debate and practical uncertainty. That sample size principles, guidelines and tools have been developed to enable researchers to set, and justify the acceptability of, their sample size is an indication that the issue constitutes an important marker of the quality of qualitative research. Nevertheless, research shows that sample size sufficiency reporting is often poor, if not absent, across a range of disciplinary fields. A systematic analysis of single-interview-per-participant designs within three health-related journals from the disciplines of psychology, sociology and medicine, over a 15-year period, was conducted to examine whether and how sample sizes were justified and how sample size was characterised and discussed by authors. Data pertinent to sample size were extracted and analysed using qualitative and quantitative analytic techniques. Our findings demonstrate that provision of sample size justifications in qualitative health research is limited; is not contingent on the number of interviews; and relates to the journal of publication. Defence of sample size was most frequently supported across all three journals with reference to the principle of saturation and to pragmatic considerations. Qualitative sample sizes were predominantly – and often without justification – characterised as insufficient (i.e., ‘small’) and discussed in the context of study limitations. Sample size insufficiency was seen to threaten the validity and generalizability of studies’ results, with the latter being frequently conceived in nomothetic terms. We recommend, firstly, that qualitative health researchers be more transparent about evaluations of their sample size sufficiency, situating these within broader and more encompassing assessments of data adequacy. 
Secondly, we invite researchers to consider critically how saturation parameters found in prior methodological studies and sample size community norms might best inform, and apply to, their own project, and we suggest that data adequacy is best appraised with reference to features that are intrinsic to the study at hand. Finally, those reviewing papers have a vital role in supporting and encouraging transparent, study-specific reporting.

1,052 citations