
Peer assessment of professional behaviours in problem-based learning groups.

12 Jan 2017-Medical Education (Wiley)-Vol. 51, Iss: 4, pp 390-400
TL;DR: Peer assessment of professional behaviour within problem‐based learning groups can support learning and provide opportunities to identify and remediate problem behaviours.
Abstract: Context Peer assessment of professional behaviour within problem-based learning (PBL) groups can support learning and provide opportunities to identify and remediate problem behaviours. Objectives We investigated whether a peer assessment of learning behaviours in PBL is sufficiently valid to support decision making about student professional behaviours. Methods Data were available for two cohorts of students, in which each student was rated by all of their PBL group peers using a modified version of a previously validated scale. Following the provision of feedback to the students, their behaviours were again peer-assessed. A generalisability study was undertaken to calculate the students’ professional behaviour scores, sources of error that impacted the reliability of the assessment, changes in student rating behaviour, and changes in mean scores after the delivery of feedback. Results Peer assessment of professional learning behaviour was highly reliable for within-group comparisons (G = 0.81–0.87), but poor for across-group comparisons (G = 0.47–0.53). Feedback increased the range of ratings given by assessors and brought their mean ratings into closer alignment. More of the increased variance was attributable to assessee performance than to assessor stringency and hence there was a slight improvement in reliability, especially for comparisons across groups. Mean professional behaviour scores were unchanged. Conclusions Peer assessment of professional learning behaviours may be unreliable for decision making outside a PBL group. Faculty members should not draw conclusions from peer assessment about a student's behaviour compared with that of their peers in the cohort, and such a tool may not be appropriate for summative assessment. Health professional educators interested in assessing student professional behaviours in PBL groups might focus on opportunities for the provision of formative peer feedback and its impact on learning.

Summary (2 min read)


Background

  • There are compelling reasons why medical students need to learn how to both give and receive feedback.
  • In Australia, being honest, objective and constructive when assessing the performance of colleagues, including students, is part of good medical practice.
  • So far, research using a recognizable framework of validation31 of the peer assessment of professional behaviours in PBL tutorial groups has focused on establishing the internal structure of the assessment and its relationship with other variables of interest.
  • The design of the G study was not reported, and it was unclear whether this figure related to students' peer assessment scores within their own PBL group, or students' scores across all PBL groups.

Methods

  • Data were available for two cohorts of students who were learning in PBL groups.
  • Each student was rated by his or her PBL group peers on a modified version of a previously validated professional learning behavior scale.
  • Following provision of feedback to the students, their behaviours were further peer assessed.
  • A generalisability study was undertaken to calculate the students' professional behaviours, sources of error that impacted the reliability of the assessment, changes in rating behaviour, and changes in mean scores after receiving feedback.

Results

  • Data on peer assessment rating were available for two separate cohorts within the same academic year on two occasions each.
  • Across both cohorts and both iterations, the largest contributor to variance was variation in assessor stringency (Varj), followed by assessor subjectivity (Varpj), followed by assessee differences (Varp).
  • Of these two, Varp is proportionately greater than Varpj, which means that the 1st Year students' ratings provide more reliable comparisons between their peers than the 2nd Year students' ratings.
  • This suggests the changes in variance of student ratings (Table 1 and 2) are more likely to be due to changed rating behaviour than changed professional behaviours.
  • This is another way of understanding the low reliability coefficient, which synthesises the information about score precision and score spread.
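The within-group versus across-group difference in G seen in these results follows directly from how a generalisability (G) coefficient is assembled from variance components. The sketch below is illustrative only: the variance-component values are hypothetical, chosen solely to reproduce the qualitative pattern of dominant assessor-stringency variance; they are not the study's estimates.

```python
# Illustrative generalisability (G) coefficients from variance components.
# All numeric values are hypothetical examples, NOT the study's estimates.

def g_relative(var_p, var_pj, n_raters):
    """Within-group (relative) G: assessees are compared against peers
    rated by the same assessors, so assessor-stringency variance cancels;
    only the assessor-assessee interaction contributes to error."""
    return var_p / (var_p + var_pj / n_raters)

def g_absolute(var_p, var_j, var_pj, n_raters):
    """Across-group (absolute) G: assessors are nested within groups, so
    differences in assessor stringency (var_j) add to the error term."""
    return var_p / (var_p + (var_j + var_pj) / n_raters)

var_p = 0.20    # assessee (true-score) variance -- smallest, as reported
var_j = 1.15    # assessor stringency variance -- largest, as reported
var_pj = 0.25   # assessor subjectivity (interaction + residual)
n = 7           # hypothetical number of peer raters per student

print(f"within-group G = {g_relative(var_p, var_pj, n):.2f}")        # ~0.85
print(f"across-group G = {g_absolute(var_p, var_j, var_pj, n):.2f}")  # 0.50
```

With identical components, simply moving the comparison across groups pulls G from about 0.85 down to 0.50: the reliability gap arises from where assessor stringency sits in the error term, not from any change in assessee behaviour.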

Conclusion

  • A peer assessment tool measuring student professional learning behaviours in PBL groups is unreliable, and therefore not valid for decision-making outside a PBL group.
  • Faculty should not draw any conclusions from the peer assessment about a student's behaviour compared with their peers in the cohort.
  • The provision of a summary of the peer feedback had a demonstrable effect on students' behaviour as peer assessors, by providing formative feedback on their own behaviour from their PBL group peers.
  • Health professional educators need to reframe the question of assessing professional behaviours in PBL groups to focus on opportunities for formative peer feedback and its impact on learning.
  • [Figure: study design. The 1st Year cohort (n = 281) and 2nd Year cohort (n = 328) each gave peer assessments twice (pre and post receiving peer feedback), across PBL Blocks 2, 5, 7 and 9.]


This is a repository copy of Peer assessment of professional behaviours in problem-based
learning groups.
White Rose Research Online URL for this paper:
http://eprints.whiterose.ac.uk/110724/
Version: Submitted Version
Article:
Roberts, C., Jorm, C., Gentilcore, S. et al. (1 more author) (2017) Peer assessment of
professional behaviours in problem-based learning groups. Medical Education. ISSN
0308-0110
https://doi.org/10.1111/medu.13151
eprints@whiterose.ac.uk
https://eprints.whiterose.ac.uk/
Reuse
Unless indicated otherwise, fulltext items are protected by copyright with all rights reserved. The copyright
exception in section 29 of the Copyright, Designs and Patents Act 1988 allows the making of a single copy
solely for the purpose of non-commercial research or private study within the limits of fair dealing. The
publisher or other rights-holder may allow further reproduction and re-use of this version - refer to the White
Rose Research Online record for this item. Where records identify the publisher as the copyright holder,
users can verify any specific terms of use on the publisher’s website.
Takedown
If you consider content in White Rose Research Online to be in breach of UK law, please notify us by
emailing eprints@whiterose.ac.uk including the URL of the record and the reason for the withdrawal request.

Peer assessment of professional behaviours in problem-based learning groups
Chris Roberts
Christine Jorm
Jim Crossley
Stacey Gentilcore

Abstract
Background
It is a common conception that peer assessment of professional behaviours within small-group
activities such as problem-based learning (PBL) can be valid and reliable. Consequently, poor student
scores may lead to referral to faculty-based student support or disciplinary systems. We wished to
determine whether a multisource feedback tool measuring professional learning behaviours in PBL
groups is sufficiently valid for decision-making about student professional behaviours.
Methods
Data were available for two cohorts of students who were learning in PBL groups. Each student was
rated by his or her PBL group peers on a modified version of a previously validated professional
learning behavior scale. Following provision of feedback to the students, their behaviours were
further peer assessed. A generalisability study was undertaken to calculate the students'
professional behaviours, sources of error that impacted the reliability of the assessment, changes in
rating behaviour, and changes in mean scores after receiving feedback.
Results
A peer assessment of 'professional' learning behaviour within PBL groups was highly reliable for
'within group' comparisons (G = 0.81–0.87) but poor for 'across group' comparisons (G = 0.47–0.53).
This was because the stringency of fellow students as assessors was highly variable and they were
nested within groups. Feedback increased the range of ratings given by assessors and brought their
mean ratings into closer alignment. More of the increased variance was attributable to assessee
performance rather than assessor stringency, so there was a slight improvement in reliability,
especially for comparisons across groups. Professional behaviour scores were unchanged.
Conclusion
A multisource feedback tool measuring professional learning behaviours in PBL groups is unreliable
for decision-making outside a PBL group. Faculty should not draw any conclusions from the peer
assessment about a student's behaviour compared with their peers in the cohort. The provision of a
summary of the peer feedback had a demonstrable effect on student behaviour as peer assessors,
by providing formative feedback on their own behaviour from their PBL group peers, but not on
their own professional behaviour. Health professional educators need to reframe the question of
assessing professional behaviours in PBL groups to focus on opportunities for formative peer
feedback and its impact on learning.
Keywords: Peer Assessment, Problem-Based Learning, Generalisability, Professional Behaviour.

Background

There are compelling reasons why medical students need to learn how to both give and receive feedback. Feedback is a critical component of fitness to practice. Doctors are known to be reluctant to report incompetent and impaired colleagues,1,2 yet the need for reporting often arises after a sustained failure to give effective peer feedback. In the UK, colleague and patient feedback is one element of the supporting information that the medical council requires doctors to collect and reflect upon as part of the process of revalidation.3 In Australia, being honest, objective and constructive when assessing the performance of colleagues, including students, is part of good medical practice.4 Multisource feedback (MSF) for doctors, with comment from peers, supervisors, and other health professionals, is becoming a common method of collecting this kind of feedback in the work-based assessment of junior doctors in many parts of the world.5-9 MSF provides an efficient, questionnaire-based assessment method that provides feedback about clinical and non-clinical performance to trainees across specialties and is considered valuable for both formative and summative assessments.7 It is thought to lead to performance improvement,10 although individual factors, the context of the feedback, and the presence of facilitation have a profound effect on the response. More broadly, resistance to accepting feedback is considered a marker of unprofessional behaviour.11

Some medical educators have incorporated the assessment of professional behaviour into the medical school curriculum to offer the opportunity of early detection and timely remediation for students who exhibit dysfunctional behaviour. Such behaviours are assessable by a variety of methods.12 Some medical schools provide a general reporting facility.13 Yet students struggle with reporting an unprofessional peer lest they bring harm to the peer, themselves, or the group they are working in.14 Students are often reluctant to give feedback on peers.15 On the other hand, students are seen as having a key role in driving learning, and thus should be generating and soliciting their own feedback.16 F… …nd the desired behaviour.17

Methods such as peer 360-degree feedback for assessing undergraduate medical students' personal and professional behaviours are thought to have sufficient utility to be used summatively, despite student ambivalence towards judgments of these behaviours.18 They are thought to have high reliability and to provide stable estimates of error variance across independent cohorts of raters,19 if a sufficient number of observers are used.20

Some schools have made use of problem-based learning (PBL) as an opportunity for peer assessment of a range of professional behaviours.14,20-22 A learning environment such as that of PBL, with its characteristic features of students working together in small groups in a highly self-directed and experiential learning activity, seems eminently suited to fostering appropriate professional behaviours.23 Whilst students may be concerned about peer assessment,13,20,24 the quality of the contributions that students make during tutorials strongly affects the quality of the discussion and therefore group functioning.22 Whilst tutors have only limited time to observe each student, students have many opportunities to observe each other.13,25 A number of studies have measured tutorial group effectiveness based on student self-assessment of particular behaviours.26-28 However, the literature is equivocal as to whether such assessments are optimal in terms of reliability and validity.22 Methods of constructing and delivering constructive and objective peer feedback can be taught.29 For example, several years' experience with peer assessment at Rochester School of Medicine has demonstrated that peers can provide reliable, stable ratings of both work habits (e.g. preparation, problem solving and initiative) and interpersonal attributes (e.g. truthfulness, respect, integrity and empathy).25
There is a scarcity of research investigating the validity of peer assessment tools within PBL.30 So far, research using a recognizable framework of validation31 of the peer assessment of professional behaviours in PBL tutorial groups has focused on establishing the internal structure of the assessment and its relationship with other variables of interest. Kamp et al. (2011) developed and validated the Maastricht Peer Activity Rating Scale (M-PARS), a tool measuring constructive, motivational, and collaborative factors, in the PBL tutorial.22 It was found to have a good model fit using confirmatory factor analysis, with high correlations between the three subscales. In addition, generalisability studies were conducted to examine how many different peer ratings per individual student were necessary to ensure a reliable evaluation of one student. When students were evaluated by at least four of their peers, the G coefficient was 0.77. However, the design of the G study was not reported, and it was unclear whether this figure related to students' peer assessment scores within their own PBL group, or students' scores across all PBL groups. Van Mook et al.32 have shown that web-based peer assessment of professional behaviours is acceptable to students and significantly increased the amount, although not the quality, of written feedback,
suggesting that five raters was optimal. Papinczak et al. (2007) developed a peer assessment instrument in which the Cronbach's alpha coefficients of peer-averaged scores across all PBL groups ranged from 0.66–0.77. Other evidence of validity was reported, demonstrating peer-averaged scores correlating moderately with tutor ratings initially (r = 0.40) and improving over time
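The question left open by the Kamp et al. G study (how many peer raters are needed, and for which kind of comparison) is typically explored with a decision (D) study, which projects G for varying numbers of raters from fixed variance components. The following is a hedged sketch using assumed, illustrative components, not values taken from any of the cited studies:

```python
# Illustrative D-study projection: how the G coefficient grows with the
# number of peer raters per student. Variance components are assumed
# values for illustration only.

def projected_g(var_p, var_error, n_raters):
    """Relative G when each student is rated by n_raters peers;
    rater-linked error variance averages out in proportion to 1/n."""
    return var_p / (var_p + var_error / n_raters)

var_p = 0.30      # assessee (true-score) variance (assumed)
var_error = 0.40  # single-rater error variance (assumed)

for n in (1, 2, 4, 8):
    print(f"n_raters={n}: G = {projected_g(var_p, var_error, n):.2f}")
```

Under these assumed components, G rises from roughly 0.43 with one rater to 0.75 with four. Note that such a projection says nothing about which variance sources sit in the error term, which is precisely why the unreported design of the Kamp et al. G study makes its 0.77 figure hard to interpret.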


References

27 Feb 1973
TL;DR: The dependability of behavioral measurements: theory of generalizability for scores and profiles is studied to establish whether these measurements can be trusted to be reliable in the real world.
Abstract: CRONBACH, Lee J., GLESER, Goldine C., NANDA, Harinder & RAJARATNAM, Nageswar. The Dependability of Behavioral Measurements: Theory of Generalizability for Scores and Profiles. New York: John Wiley and Sons, 1972.

Journal ArticleDOI
TL;DR: In this paper, the authors develop and analyse two models of feedback: the first is based on the origins of the term in the disciplines of engineering and biology, and the second draws on ideas of sustainable assessment.
Abstract: Student feedback is a contentious and confusing issue throughout higher education institutions. This paper develops and analyses two models of feedback: the first is based on the origins of the term in the disciplines of engineering and biology. It positions teachers as the drivers of feedback. The second draws on ideas of sustainable assessment. This positions learners as having a key role in driving learning, and thus generating and soliciting their own feedback. It suggests that the second model equips students beyond the immediate task and does not lead to false expectations that courses cannot deliver. It identifies the importance of curriculum design in creating opportunities for students to develop the capabilities to operate as judges of their own learning.

Journal ArticleDOI
John Sandars
TL;DR: There is little research evidence to suggest that reflection improves quality of care but the process of care can be enhanced, and guided reflection, with supportive challenge from a mentor or facilitator, is important.
Abstract: Reflection is a metacognitive process that creates a greater understanding of both the self and the situation so that future actions can be informed by this understanding. Self-regulated and lifelo…

Journal ArticleDOI
TL;DR: Context Problem-based learning (PBL) is widely used in higher education but in educational practice problems are often encountered, such as tutors who are too directive, problems that are too well-structured, and dysfunctional tutorial groups.
Abstract: Context Problem-based learning (PBL) is widely used in higher education. There is evidence available that students and faculty are highly satisfied with PBL. Nevertheless, in educational practice problems are often encountered, such as tutors who are too directive, problems that are too well-structured, and dysfunctional tutorial groups. Purpose The aim of this paper is to demonstrate that PBL has the potential to prepare students more effectively for future learning because it is based on four modern insights into learning: constructive, self-directed, collaborative and contextual. These four learning principles are described and it is explained how they apply to PBL. In addition, available research is reviewed and the current debate in research on PBL is described. Discussion It is argued that problems encountered in educational practice usually stem from poor implementation of PBL. In many cases the way in which PBL is implemented is not consistent with the current insights on learning. Furthermore, it is argued that research on PBL should contribute towards a better understanding of why and how the concepts of constructive, self-directed, collaborative and contextual learning work or do not work and under what circumstances. Examples of studies are given to illustrate this issue.

Journal ArticleDOI
TL;DR: Logistic regression analysis showed that disciplined physicians were more likely to have Concern/Problem/Extreme excerpts in their medical school file, and problematic behavior in medical school is associated with subsequent disciplinary action by a state medical board.
Abstract: Purpose. To determine if medical students who demonstrate unprofessional behavior in medical school are more likely to have subsequent state board disciplinary action. Method. A case–control study was conducted of all University of California, San Francisco, School of Medicine graduates disciplined by the Medical Board of California from 1990–2000 (68). Control graduates (196) were matched by medical school graduation year and specialty choice. Predictor variables were male gender, undergraduate grade point average, Medical College Admission Test scores, medical school grades, National Board of Medical Examiner Part 1 scores, and negative excerpts describing unprofessional behavior from course evaluation forms, dean's letter of recommendation for residencies, and administrative correspondence. Negative excerpts were scored for severity (Good/Trace versus Concern/Problem/Extreme). The outcome variable was state board disciplinary action. Results. The alumni graduated between 1943 and 1989. Ninety-five percent of the disciplinary actions were for deficiencies in professionalism. The prevalence of Concern/Problem/Extreme excerpts in the cases was 38% and 19% in controls. Logistic regression analysis showed that disciplined physicians were more likely to have Concern/Problem/Extreme excerpts in their medical school file (odds ratio, 2.15; 95% confidence interval, 1.15–4.02; p = .02). The remaining variables were not associated with disciplinary action. Conclusion. Problematic behavior in medical school is associated with subsequent disciplinary action by a state medical board. Professionalism is an essential competency that must be demonstrated for a student to graduate from medical school. Acad Med. 2004;79:244–249.
