The impact of trial stage, developer involvement and international transferability on universal
social and emotional learning programme outcomes: A meta-analysis
Wigelsworth, M., Lendrum, A., Oldfield, J., Scott, A., ten Bokkel, I., Tate, K., & Emery, C.

Abstract
This study expands upon the extant prior meta-analytic literature by exploring previously theorised
reasons for the failure of school-based, universal social and emotional learning (SEL) programmes to
produce expected results. Eighty-nine studies reporting the effects of school-based, universal SEL
programmes were examined for differential effects on the basis of: 1) stage of evaluation (efficacy or
effectiveness); 2) involvement from the programme developer in the evaluation (led, involved,
independent); and 3) whether the programme was implemented in its country of origin (home or
away). A range of outcomes were assessed, including social-emotional competence, attitudes
towards self, pro-social behaviour, conduct problems, emotional distress, academic achievement
and emotional competence. Differential gains across all three factors were shown, although not
always in the direction hypothesised. The findings from the current study demonstrate a revised
and more complex relationship between identified factors and dictate major new directions for the
field.
Key words: meta-analysis, socio-emotional, efficacy, developer, transferability

Literature Review
There is an emerging consensus that the role of the school should include supporting children’s
emotional education and development (Greenberg, 2010; Weare, 2010). This is often accomplished
through the implementation of universal social and emotional learning (SEL) programmes which aim
to improve learning, promote emotional well-being, and prevent problem behaviours through the
development of social and emotional competencies (Elias et al., 2001; Greenberg et al., 2003).
What is SEL?
Social and emotional learning (SEL) is represented by the promotion of five core competencies: self-
awareness; self-management; social awareness; relationship skills; and responsible decision-making
(Collaborative for Academic, Social and Emotional Learning, 2002). Although a broad definition
serves to encompass many aspects of the effective promotion of SEL, it does little to differentiate or
identify ‘essential ingredients’ of a programme of change. As a result, SEL is implemented through a
variety of formats, differing levels of training and support, varying degrees of intensity, and variation
with regard to the relative importance placed on each of the five core competencies. However, most
SEL programmes typically feature an explicit, taught curriculum, delivered by teachers (with or
without additional coaching and technical support) during school hours (examples
of SEL programmes can be seen at casel.org).
Currently, a wide range of SEL programmes feature in schools and classrooms across the world,
including in the USA (e.g. Greenberg, Kusche, Cook, & Quamma, 1995), Australia (e.g. Graetz et al.,
2008), across Europe (e.g. Holsen, Smith, & Frey, 2008), and in the UK (e.g. DfES, 2007). Although
meta-analysis is a poor tool for assessing the complexity of any specific curriculum intervention or
context, meta-analytic approaches are required to support the hitherto theoretical assumption of the
universality of teachable SEL competencies. Indeed, recent meta-analyses in the United States (Durlak,
Weissberg, Dymnicki, Taylor, & Schellinger, 2011) and the Netherlands (Sklad, Diekstra, Ritter, & Ben,
2012) have been used to suggest that high quality, well-implemented universal SEL interventions,
designed to broadly facilitate a range of intra- and inter-personal competencies, can lead to a range
of salient outcomes, including improved social and emotional skills, school attitudes and academic
performance, and reduced
mental health difficulties (Durlak, Weissberg, Dymnicki, Taylor, &
Schellinger, 2011; Sklad, Diekstra, Ritter & Ben, 2012; Wilson & Lipsey, 2007). However, individual
SEL programmes are not always able to produce the same impressive results indicated by these
meta-analyses when adopted and implemented by practitioners in schools (Social and Character
Development Research Consortium, 2010).
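
To make the pooling step underlying such reviews concrete, the following is a minimal Python sketch of DerSimonian and Laird's random-effects estimator together with the I² heterogeneity statistic. It is illustrative only, under the assumption of per-study standardised mean differences and their variances; the numbers are hypothetical and this is not the analysis code of any of the cited reviews.

import numpy as np

def pool_random_effects(effects, variances):
    # DerSimonian-Laird random-effects pooling of per-study effect sizes
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                              # fixed-effect (inverse-variance) weights
    fixed_mean = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed_mean) ** 2)      # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance (DL estimator)
    w_star = 1.0 / (variances + tau2)                # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0   # I-squared, as a percentage
    return pooled, se, tau2, i2

# Hypothetical effect sizes (Hedges' g) and variances for five trials
pooled, se, tau2, i2 = pool_random_effects(
    [0.22, 0.35, 0.10, 0.41, 0.18],
    [0.010, 0.020, 0.015, 0.030, 0.012])
print(f"pooled g = {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}, I^2 = {i2:.0f}%")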
Research in prevention science suggests a number of possible reasons for this discrepancy, including
implementation failure (Durlak & DuPre, 2008), a reliance on the results of ‘early’ trials focusing on
the internal logic of interventions rather than their ‘real world’ applicability (Flay et al., 2005),
developer involvement in trials (Eisner, 2009) and a lack of cultural transferability of interventions
(Castro, Barrera, & Martinez, 2004). Although implementation fidelity is now recognised as an
important feature in the successful delivery of SEL programmes (included in Durlak et al.’s 2011
meta-analysis), there has been no similar empirical consideration of the other factors. Underlying
such explanations is an implicit assumption of a degree of invariance, that is, a ‘treatment’ approach in the
implementation of SEL programmes. Many consumers of educational research will recognise a
‘medical model’ of evaluation (typically involving experimental designs), an approach which is not
without debate (for a brief summary see Evans and Benefield (2001)). Accordingly, prior research in
educational evaluation has noted that such an approach is potentially limited, as associated
methodologies for investigation (neatly described by Elliott and Kushner (2007) as the “statistical
aggregation of gross yield” (p.324)) will fail to capture the complexities of the interactions within specific
contexts required to explain findings. Indeed, a lack of attention to process and implementation has been noted in
this particular field (Lendrum & Humphrey, 2010). However, suggested alternate directions (e.g.
anthropological, illuminative, case study (ibid)) can fail to capture the prevalence or magnitude of
trends, and, in their own way, also fail to uncover important lessons as to the successful
implementation of educational interventions. Therefore, there is an opportunity to extend prior
influential work (i.e. Durlak, Weissberg, Dymnicki, Taylor, & Schellinger, 2011; Sklad, Diekstra, Ritter,
& Ben, 2012) utilising meta-analytic approaches, to examine key indicators potentially influencing
successful outcomes, and in doing so, to consider the extent to which such techniques are useful in this
context.
Key indicators
In attempting to explain the high degree of variation between programmes in achieving successful
outcomes for children, this article now considers the rationale for exploring theoretically important
(e.g. as conceptualised by Lendrum and Wigelsworth (2013)), but often ignored factors, and
hypothesises their likely effect on SEL programme outcomes.
Stage of evaluation: Efficacy vs. effectiveness
An important (and often promoted) indicator as to the potential success of a programme is its
history of development and evaluation. Ideally, an intervention should be tested at several stages
between its initial development and its broad dissemination into routine practice (Greenberg,
Domitrovich, Graczyk, & Zins, 2005) and frameworks have been provided in recent literature to
enable this. For instance, drawn from evaluations of complex health interventions, Campbell et al.
(2000) provide guidance on specific sequential phases for developing interventions: developing
theory (pre-phase), modelling empirical relationships consistent with intended outcomes (phase I),
exploratory trialling (phase II), randomised controlled trials under optimum conditions (phase III), and
long term implementation in uncontrolled settings (phase IV). An intervention should pass through
all phases to be considered truly effective and evidence-based (Campbell et al., 2000).
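
As a purely illustrative aid (not part of the study's coding scheme), the phases above can be sketched as a simple classification in Python; the enum labels follow Campbell et al.'s (2000) sequence as summarised here, and the helper function is hypothetical.

from enum import Enum

class TrialPhase(Enum):
    # Labels follow Campbell et al.'s (2000) sequential phases
    THEORY = "pre-phase: developing theory"
    MODELLING = "phase I: modelling empirical relationships"
    EXPLORATORY = "phase II: exploratory trialling"
    RCT_OPTIMUM = "phase III: randomised controlled trial under optimum conditions"
    ROUTINE = "phase IV: long-term implementation in uncontrolled settings"

def evidence_claim(completed: set) -> str:
    # Only an intervention that has passed through all phases is treated as
    # 'truly effective and evidence-based' in Campbell et al.'s terms
    return "evidence-based" if completed == set(TrialPhase) else "evidence still incomplete"

# A programme evaluated only under optimum conditions has not yet earned the claim
print(evidence_claim({TrialPhase.THEORY, TrialPhase.MODELLING, TrialPhase.RCT_OPTIMUM}))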
An important distinction in Campbell et al.’s framework is the recognition that interventions are
typically first ‘formally’ evaluated under optimal conditions of delivery (phase III) (more broadly
referred to as efficacy trials (Flay, 1986)), such as with the provision of highly-trained and carefully

References

DerSimonian, R., & Laird, N. (1986). Meta-analysis in clinical trials. Controlled Clinical Trials, 7(3), 177–188.

Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando, FL: Academic Press.

Higgins, J. P. T., & Green, S. (Eds.). (2008). Cochrane handbook for systematic reviews of interventions. Chichester: Wiley-Blackwell.

Higgins, J. P. T., Thompson, S. G., Deeks, J. J., & Altman, D. G. (2003). Measuring inconsistency in meta-analyses. BMJ, 327, 557–560.

Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & The PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), e1000097.
Frequently Asked Questions (10)
Q1. What contributions have the authors mentioned in the paper "The impact of trial stage, developer involvement and international transferability on universal social and emotional learning programme outcomes: a meta-analysis"?

This study expands upon the extant prior meta-analytic literature by exploring previously theorised reasons for the failure of school-based, universal social and emotional learning (SEL) programmes to produce expected results. Eighty-nine studies reporting the effects of school-based, universal SEL programmes were examined for differential effects on the basis of: 1) stage of evaluation (efficacy or effectiveness); 2) involvement from the programme developer in the evaluation (led, involved, independent); and 3) whether the programme was implemented in its country of origin (home or away).

As there is little caution in the speed of acceptance and roll out of SEL programmes internationally, despite these gaps in knowledge, findings of the current study have a global significance and present an opportunity to shape future directions and address several key lines of enquiry. Further meta-analytical approaches (e.g. by grouping studies into ‘clinically meaningful units’ (Melendez-Torres, Bonell, & Thomas, 2015) of function and process (Hawe, Shiell, & Riley, 2004) (e.g. mode of delivery)) alongside more ‘bottom-up’ approaches to examine the unique ecologies of individual classroom practices in more detail are advised. As SEL is a global phenomenon, the importance of additional work in understanding the significance of cultural validity specifically becomes increasingly important, given that results from the current study suggest that SEL programmes identified as successful can be rendered ineffective when transported to other countries. Aside from revising expectations of the likely effects that can be generated by an exported programme, there is arguably a wider methodological issue to be addressed when designing studies to assess transported programmes.

Whether one category of study was significantly different (α < .05) from another (i.e. efficacy vs. effectiveness) was determined using a technique known as proportional overlap.
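
Since the exact overlap rule applied in the study is not reproduced here, the following Python sketch illustrates one common convention (comparing the overlap of two 95% confidence intervals against the average margin of error); the subgroup estimates and standard errors are hypothetical.

def ci_overlap_proportion(est1, se1, est2, se2, z=1.96):
    # Proportion of the average margin of error by which two 95% CIs overlap
    moe1, moe2 = z * se1, z * se2
    lo1, hi1 = est1 - moe1, est1 + moe1
    lo2, hi2 = est2 - moe2, est2 + moe2
    overlap = min(hi1, hi2) - max(lo1, lo2)   # negative if the intervals are disjoint
    return overlap / ((moe1 + moe2) / 2.0)

# Hypothetical subgroup effect sizes: efficacy vs. effectiveness trials
prop = ci_overlap_proportion(0.30, 0.05, 0.12, 0.06)
print(f"proportional overlap = {prop:.2f} (under this convention, < ~0.5 suggests p < .05)")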

There is arguably the potential for programmes to be adapted to cultural contexts to such an extent that they become, in effect, new programmes requiring re-validation, ideally through the use of an evidence framework such as Campbell et al.’s (2000), in order to test the underlying programme theory and internal validity.

‘Cochrane protocols for systematic reviews of interventions’ (Higgins & Green, 2008) were adopted for the literature searches, coding, and analytical strategy. 

This is a relatively optimistic interpretation of the data, as it would imply that the only limiting factor in successful implementation is a lack of more established ‘ground work’ in key areas (such as those identified by Elliott and Mihalic (2004)) prior to the introduction of the programme.

To further investigate Eisner’s (2009) high fidelity hypothesis (i.e. that implementation of a given intervention is of a higher quality in studies in which the programme developer is involved, leading to better results), a cross-tabulated analysis (developer involvement vs. issues with implementation) was conducted for all studies which reported implementation (n = 61).
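
A cross-tabulation of this kind could, for example, be examined with a chi-square test of independence, as in the following illustrative Python sketch; the counts are hypothetical, not the study's data.

from scipy.stats import chi2_contingency

# Rows: developer-led, developer-involved, independent evaluations
# Columns: implementation issues reported, no issues reported (hypothetical counts)
observed = [[ 8, 14],
            [10, 12],
            [12,  5]]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.3f}")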

For conduct problems, academic achievement and emotional competence, more favourable effects were seen for studies coded as ‘away’.

The cynical view proposes that the more favourable results in developer-led trials stem from systematic biases that influence decision-making during a study.