Journal Article

Feedback in a clinical setting: A way forward to enhance student's learning through constructive feedback.

01 Jul 2017-Journal of Pakistan Medical Association (J Pak Med Assoc)-Vol. 67, Iss: 7, pp 1078-1084
TL;DR: The ultimate purpose of this paper is to convince clinical directors of the need to establish a robust system for giving feedback to learners, and to equip clinical educators with the skills required to provide constructive feedback to their learners.
Abstract: Feedback is a dynamic process in which information about observed performance is used to promote desirable behaviours and correct negative ones. The importance of feedback is widely acknowledged, yet there is still inconsistency in the amount, type and timing of feedback received from the clinical faculty. Little effort has been made by educators to empower learners with the skills needed to receive and use feedback effectively. Some institutions conduct faculty development workshops and courses to help clinicians deliver constructive feedback to learners, but despite these efforts learners are not fully satisfied with the quality of feedback they receive from their busy clinicians. The aim of this paper is to clarify what feedback actually is, the types and structure of feedback, the essential components of constructive feedback, the benefits of providing feedback, the barriers affecting the provision of timely feedback, and the different models used for providing feedback. The ultimate purpose is to convince clinical directors of the need to establish a robust system for giving feedback to learners and to equip clinical educators with the skills required to provide constructive feedback. For the literature review, we used the following key words: feedback, constructive feedback, barriers to feedback, principles of constructive feedback, models of feedback, reflection, self-assessment and clinical practice. The databases searched were the Cardiff University library catalogue, PubMed, Google Scholar, Web of Knowledge and ScienceDirect.
Citations
Journal ArticleDOI
TL;DR: A scoping review of tools designed to add structure to clinical teaching, with a thematic analysis to establish definitional clarity, proposes that these tools be named “deliberate teaching tools” (DTTs) and defined as “frameworks that enable clinicians to have a purposeful and considered approach to teaching encounters by incorporating elements identified with good teaching practice”.
Abstract: Purpose and method: We conducted a scoping review of tools designed to add structure to clinical teaching, with a thematic analysis to establish definitional clarity.Results: Six thousand and forty...

13 citations

Journal ArticleDOI
TL;DR: This article explored nursing students' perceptions of the feedback they received in clinical settings at a district hospital in southern Namibia and found that the students were not confident and did not feel free to practise clinical skills during practical placements because of the nature of the feedback they received.
Abstract: BACKGROUND Feedback is the backbone of educational interventions in clinical settings, yet it is often misunderstood and difficult to deliver effectively. Nursing students were not confident and did not feel free to practise clinical skills during practical placements because of the nature of the feedback they received whilst in these placements. Moreover, they experienced feedback as a barrier to completing practical workbooks. OBJECTIVE The purpose of this article was to report on a qualitative study which explored nursing students' perceptions of the feedback they received in clinical settings at a district hospital. METHOD This study was conducted at a district hospital located in southern Namibia. An explorative qualitative design with an interpretivist perspective was followed. A total of 11 nursing students from two training institutions were recruited by purposive sampling and interviewed individually. With the participants' permission, all interviews were audio recorded with a digital voice recorder and transcribed verbatim. Data were then analysed manually by qualitative content analysis. RESULTS Four themes emerged: feedback is perceived as a teaching and learning process in clinical settings; participants perceived the differing nature of feedback in clinical settings; participants perceived personal and interpersonal implications of feedback; and participants identified strategies to improve feedback in clinical settings. CONCLUSION Nursing students appreciated the feedback they received in clinical settings, despite the challenges related to group feedback and the emotional reactions it provoked. Nursing students should be prepared to be more receptive to the feedback conveyed in clinical settings.

5 citations

Journal ArticleDOI
TL;DR: GCH Control Software, as discussed by the authors, was developed as a multifunctional desktop tool complementing GCH System 2.0-instrumented forearm crutches; it monitors the applied loads, displays real-time graphical and numerical information, and enables the correction of inaccuracies through feedback technology during assisted gait.
Abstract: Background Measuring weight bearing is an essential aspect of clinical care for lower limb injuries such as sprains or meniscopathy surgeries. This care often involves the use of forearm crutches for partial loads progressing to full loads. Therefore, feasible methods of load monitoring for daily clinical use are needed. Objective The main objective of this study was to design an innovative multifunctional desktop load-measuring software that complements GCH System 2.0-instrumented forearm crutches and monitors the applied loads, displaying real-time graphical and numerical information and enabling the correction of inaccuracies through feedback technology during assisted gait. The secondary objective was to perform a preliminary implementation trial. Methods The software was designed for indoor use (clinics/laboratories). It translates the crutch sensor signal in millivolts into force units, records and analyzes data (10-80 Hz), and provides real-time effective curves of the loads exerted on the crutches. It covers numerous types of extrinsic feedback, including visual, acoustic (verbal/beeps), concurrent, terminal, and descriptive feedback, and includes a database for clinical and research use. An observational descriptive pilot study was performed with 10 healthy subjects experienced in bilateral assisted gait. The Wilcoxon matched-pairs signed-rank test was used to evaluate the evolution of each subject's load accuracy (i.e., changes in the loads exerted on the crutches for each support) across walks, interpreted at the 95% confidence level. Results GCH Control Software was developed as a multifunctional desktop tool complementing GCH System 2.0-instrumented forearm crutches. In the pilot implementation of the feedback mechanism, 96/100 load errors were observed at baseline (walk 0, no feedback), with 7/10 subjects overloading the crutches. Errors ranged from 61.09% to 203.98%, demonstrating heterogeneity. With double-bar feedback, 54/100 errors were found in walk 1, 28/100 in walk 2, and 14/100 in walk 3. The first walk with double-bar feedback (walk 1) began with errors similar to the baseline walk, generally followed by attempts at correction. The Wilcoxon matched-pairs signed-rank test showed that all participants steadily improved the accuracy of the loads applied to the crutches. In particular, Subject 9 required extra feedback with two single-bar walks to focus on the total load. The participants also corrected load-balance and fluency errors: three subjects made one load-balance error each and one subject made six fluency errors during the three double-bar walks. The latter subject received additional feedback with two balance-bar walks to focus on the load balance. Conclusions GCH Control Software proved useful for monitoring the loads exerted on forearm crutches, providing a variety of feedback for correcting load accuracy, load balance between crutches, and fluency. The findings of the complementary implementation were satisfactory, although clinical trials with larger samples are needed to assess the efficacy of the different feedback mechanisms and to select the best alternatives in each case.
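To make the analysis pipeline concrete, a minimal sketch follows: converting the raw crutch sensor signal from millivolts into force units, expressing each support's peak load as a percentage error against a prescribed target, and comparing paired per-support errors between walks with the Wilcoxon matched-pairs signed-rank test. The calibration constants, target load, and sample values below are hypothetical; the abstract does not specify the actual conversion or data format.

```python
# Illustrative sketch of the load-monitoring analysis described above.
# The calibration slope/offset, target load, and peak values are assumptions.
import numpy as np
from scipy.stats import wilcoxon

CAL_SLOPE_N_PER_MV = 0.85   # hypothetical calibration: millivolts -> newtons
CAL_OFFSET_N = 0.0          # hypothetical calibration offset

def mv_to_newtons(signal_mv: np.ndarray) -> np.ndarray:
    """Translate the raw crutch sensor signal (mV) into force units (N)."""
    return CAL_SLOPE_N_PER_MV * signal_mv + CAL_OFFSET_N

def load_error_percent(peak_load_n: np.ndarray, target_load_n: float) -> np.ndarray:
    """Per-support error of the applied load relative to the prescribed load."""
    return 100.0 * (peak_load_n - target_load_n) / target_load_n

# Hypothetical per-support peak signals (mV) for one subject:
# walk 0 = baseline without feedback, walk 3 = third walk with double-bar feedback.
baseline_peaks_mv = np.array([365.0, 347.0, 388.0, 359.0, 376.0, 371.0, 341.0, 382.0])
walk3_peaks_mv = np.array([300.0, 292.0, 306.0, 296.0, 304.0, 295.0, 293.0, 301.0])
target_n = 250.0

err_baseline = np.abs(load_error_percent(mv_to_newtons(baseline_peaks_mv), target_n))
err_walk3 = np.abs(load_error_percent(mv_to_newtons(walk3_peaks_mv), target_n))

# Wilcoxon matched-pairs signed-rank test on the paired per-support errors,
# interpreted at the 95% confidence level as in the study.
stat, p_value = wilcoxon(err_baseline, err_walk3)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")
```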

2 citations

Journal ArticleDOI
25 Sep 2017
TL;DR: In this article, the authors present a modified version of Pendleton's rules for giving feedback in echocardiographic training, in which the trainer provides the trainee with balanced feedback whenever there is a suggestion for improvement.
Abstract: Dear Editor, Feedback is the cornerstone of effective clinical training: correct performances are reinforced, incorrect ones are modified, and a path toward progress is identified. Feedback provides trainees with the information needed to minimize the gap between desired and actual performance and encourages them to rethink and improve their performance (1). The present article describes Pendleton's rules, their benefits and criticisms, their modified form (Pendleton plus), and their application in echocardiographic training. Pendleton's rules, which outline the usual process for giving feedback to trainees (2), include the following stages: (i) the trainee states which items he/she has done well; (ii) the trainer states which items the trainee has done well and discusses with the trainee how these were performed well; (iii) the trainee states which skills he feels should be performed differently; (iv) the trainer states what the trainee has to do to improve the identified skills; and (v) the trainee presents his practical performance-improving program (3). According to these rules, the trainer provides the trainee with balanced feedback when there is a suggestion for improvement (2, 4). The trainee and the trainer first focus on the trainee's strengths, then on his weaknesses, and then the trainer provides suggestions for improvement. Thus, strengths and weaknesses are considered equally: strengths are reinforced, and the trainee is given the opportunity to evaluate his performance before receiving criticism, which significantly reduces defensiveness against the criticism received. Stating his own limitations gives the trainee the opportunity to rethink and creates a safe environment for receiving feedback (2, 4, 5). For learning to happen, the trainer should go beyond merely stating which areas are lacking and should provide the trainee with corrective suggestions (4). However, several criticisms have been levelled at these rules, including their inflexibility; the provision of feedback in an artificial setting (2); the impossibility of separating strong and weak points in many cases (5); hypocrisy; the lack of constructive criticism and interactive discussion; their time-consuming nature; the little time allocated to assessing weaknesses (4); the anxiety caused in the trainee by the delayed assessment of weak points (2, 4); the tendency to describe events without adequate analysis; the absence of comment on how good a trainee's performance is (6); and the fact that, in applying these rules, the trainer often states either what needs to be changed or how performance can be improved, but rarely both (6). According to the conscious-competence model of skill learning, when a trainer asks a trainee what he feels he has done well, he is referring to the conscious-competence stage, in which the trainee has acquired the skill but must focus intently on it while performing it. When the trainer cites any unmentioned items done well by the trainee, he is referring to the unconscious-competence stage, in which the trainee has mastered the skill and performs it unconsciously, without thinking (and can perform other tasks at the same time). When the trainee is asked to state skills that need to be improved, this refers to the conscious-incompetence stage, since the trainee is aware of these skills and of the need to acquire them. When the trainer reviews items that need to be altered to enhance the trainee's skill set, he refers to the unconscious-incompetence stage, since the trainee has no awareness of the intended skill (7, 8).
Given these criticisms, the modified version of these

2 citations


Cites background from "Feedback in a clinical setting: A w..."

  • ...However, there have been several criticisms exacted on these rules, including inflexibility, the providing of feedback in an artificial setting (2), impossibility of separating strong and weak points in many cases (5), hypocrisy, no consideration for constructive criticism and interactive discussion, time-consuming, allocation of little time to assess weaknesses (4), making the trainee anxious due to the delayed assessment of weak points (2, 4), describing events and inadequate analysis, absence of comment on how good a trainee’s performance is (6), and that in applying these rules the trainer often states either what needs to be changed or how this performance can be improved, and rarely both together (6)....


  • ...Stating his own limitations provides the trainee the opportunity to rethink, creating a safe environment for receiving feedback (2, 4, 5)....


Dissertation
01 Jan 2018
TL;DR: Evaluation of the final prototype showed it to be a successful design for an augmented reality application that allows 3D garments designed in CAD software to be reviewed via annotations, although several aspects must be improved and professionalized before the product can be implemented in its intended context.
Abstract: An augmented reality application allowing for annotation of 3D models suited for CAD, focused on the fashion industry, has been designed. The goal of the annotation application is to improve the performance of fashion designers. To achieve this, research was done in the domains of augmented reality, interaction methods for immersive XR, cloth simulation and feedback methods. A state-of-the-art review was conducted in which several interaction methods were evaluated against selected criteria. Based on this review, several ideas suited to the context of the project were developed, evaluated and selected. From the selected ideas, a specification was drawn up, which called for a prototype of an augmented reality application allowing for annotation, worked out in Unity using the Meta 2 AR device. The interaction method best suited to the application's functionality was determined after three prototype iterations. The method that proved best was a self-developed 'pushpin' method, which makes use of the SLAM tracking of the Meta 2 device. Evaluation of the final prototype showed it to be a successful design for an augmented reality application that allows 3D garments designed via CAD software to be reviewed via annotations. However, before this product can be implemented in the correct context, several aspects have to be improved and professionalized.
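As a purely illustrative sketch of the 'pushpin' idea described above (a reviewer's comment anchored to a fixed point on the tracked 3D garment so that it stays in place as the view moves), a minimal data-structure example follows. The class and field names are hypothetical; the actual prototype was built in Unity on the Meta 2 device and its SLAM tracking, which is not reproduced here.

```python
# Illustrative sketch (not the actual Unity/Meta 2 implementation):
# each pushpin annotation is anchored to a world-space point on the garment,
# so it remains attached to the model while the SLAM-tracked view moves.
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]  # world-space position (x, y, z)

@dataclass
class PushpinAnnotation:
    anchor_world: Vec3       # point on the garment mesh where the pin is placed
    author: str              # reviewer who left the comment
    text: str                # the annotation itself
    resolved: bool = False   # whether the designer has addressed the comment

@dataclass
class GarmentReview:
    model_id: str                                           # CAD garment model
    annotations: List[PushpinAnnotation] = field(default_factory=list)

    def add_pin(self, anchor_world: Vec3, author: str, text: str) -> None:
        self.annotations.append(PushpinAnnotation(anchor_world, author, text))

    def open_items(self) -> List[PushpinAnnotation]:
        return [a for a in self.annotations if not a.resolved]

# Hypothetical usage.
review = GarmentReview(model_id="jacket_v3")
review.add_pin((0.12, 1.05, -0.30), "reviewer_1", "Seam allowance looks too narrow here.")
print(len(review.open_items()))  # -> 1
```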

1 citation


Cites background from "Feedback in a clinical setting: A w..."

  • ...‘The format of feedback is often directly related to the context’ [26]....


  • ...‘Timing and frequency of feedback are equally important for quality of feedback to be delivered’ [26]....


References
Journal ArticleDOI
TL;DR: While research in this field needs improvement in terms of rigor and quality, high-fidelity medical simulations are educationally effective and simulation-based education complements medical education in patient care settings.
Abstract: SUMMARY Review date: 1969 to 2003, 34 years. Background and context: Simulations are now in widespread use in medical education and medical personnel evaluation. Outcomes research on the use and effectiveness of simulation technology in medical education is scattered, inconsistent and varies widely in methodological rigor and substantive focus. Objectives: Review and synthesize existing evidence in educational science that addresses the question, ‘What are the features and uses of high-fidelity medical simulations that lead to most effective learning?’. Search strategy: The search covered five literature databases (ERIC, MEDLINE, PsycINFO, Web of Science and Timelit) and employed 91 single search terms and concepts and their Boolean combinations. Hand searching, Internet searches and attention to the ‘grey literature’ were also used. The aim was to perform the most thorough literature search possible of peer-reviewed publications and reports in the unpublished literature that have been judged for academic quality. Inclusion and exclusion criteria: Four screening criteria were used to reduce the initial pool of 670 journal articles to a focused set of 109 studies: (a) elimination of review articles in favor of empirical studies; (b) use of a simulator as an educational assessment or intervention with learner outcomes measured quantitatively; (c) comparative research, either experimental or quasi-experimental; and (d) research that involves simulation as an educational intervention. Data extraction: Data were extracted systematically from the 109 eligible journal articles by independent coders. Each coder used a standardized data extraction protocol. Data synthesis: Qualitative data synthesis and tabular presentation of research methods and outcomes were used. Heterogeneity of research designs, educational interventions, outcome measures and timeframe precluded data synthesis using meta-analysis. Headline results: Coding accuracy for features of the journal articles is high. The extant quality of the published research is generally weak. The weight of the best available evidence suggests that high-fidelity medical simulations facilitate learning under the right conditions. These include the following:

3,176 citations

Journal ArticleDOI
TL;DR: Empirical findings on the imperfect nature of self-assessment are reviewed and several interventions aimed at circumventing the consequences of such flawed assessments are discussed; these include training people to routinely make cognitive repairs correcting for biased self-assessments and requiring people to justify their decisions in front of their peers.
Abstract: Research from numerous corners of psychological inquiry suggests that self-assessments of skill and character are often flawed in substantive and systematic ways. We review empirical findings on the imperfect nature of self-assessment and discuss implications for three real-world domains: health, education, and the workplace. In general, people's self-views hold only a tenuous to modest relationship with their actual behavior and performance. The correlation between self-ratings of skill and actual performance in many domains is moderate to meager; indeed, at times, other people's predictions of a person's outcomes prove more accurate than that person's self-predictions. In addition, people overrate themselves. On average, people say that they are "above average" in skill (a conclusion that defies statistical possibility), overestimate the likelihood that they will engage in desirable behaviors and achieve favorable outcomes, furnish overly optimistic estimates of when they will complete future projects, and reach judgments with too much confidence. Several psychological processes conspire to produce flawed self-assessments. Research focusing on health echoes these findings. People are unrealistically optimistic about their own health risks compared with those of other people. They also overestimate how distinctive their opinions and preferences (e.g., discomfort with alcohol) are among their peers, a misperception that can have a deleterious impact on their health. Unable to anticipate how they would respond to emotion-laden situations, they mispredict the preferences of patients when asked to step in and make treatment decisions for them. Guided by mistaken but seemingly plausible theories of health and disease, people misdiagnose themselves, a phenomenon that can have severe consequences for their health and longevity. Similarly, research in education finds that students' assessments of their performance tend to agree only moderately with those of their teachers and mentors. Students seem largely unable to assess how well or poorly they have comprehended material they have just read. They also tend to be overconfident in newly learned skills, at times because the common educational practice of massed training appears to promote rapid acquisition of skill (as well as self-confidence) but not necessarily the retention of skill. Several interventions, however, can be introduced to prompt students to evaluate their skill and learning more accurately. In the workplace, flawed self-assessments arise all the way up the corporate ladder. Employees tend to overestimate their skill, making it difficult to give meaningful feedback. CEOs also display overconfidence in their judgments, particularly when stepping into new markets or novel projects, for example proposing acquisitions that hurt, rather than help, the price of their company's stock. We discuss several interventions aimed at circumventing the consequences of such flawed assessments; these include training people to routinely make cognitive repairs correcting for biased self-assessments and requiring people to justify their decisions in front of their peers. The act of self-assessment is an intrinsically difficult task, and we enumerate several obstacles that prevent people from reaching truthful self-impressions. We also propose that researchers and practitioners should recognize self-assessment as a coherent and unified area of study spanning many subdisciplines of psychology and beyond. Finally, we suggest that policymakers and others who make real-world assessments should be wary of self-assessments of skill, expertise, and knowledge, and should consider ways of repairing self-assessments that may be flawed.

1,599 citations

Journal ArticleDOI
12 Aug 1983-JAMA
TL;DR: In the setting of clinical medical education, feedback refers to information describing students' or house officers' performance in a given activity that is intended to guide their future performance in that same or in a related activity, as discussed in this paper.
Abstract: In the setting of clinical medical education, feedback refers to information describing students' or house officers' performance in a given activity that is intended to guide their future performance in that same or in a related activity. It is a key step in the acquisition of clinical skills, yet feedback is often omitted or handled improperly in clinical training. This can result in important untoward consequences, some of which may extend beyond the training period. Once the nature of the feedback process is appreciated, however, especially the distinction between feedback and evaluation and the importance of focusing on the trainees' observable behaviors rather than on the trainees themselves, the educational benefit of feedback can be realized. This article presents guidelines for offering feedback that have been set forth in the literature of business administration, psychology, and education, adapted here for use by teachers and students of clinical medicine. (JAMA 1983;250:777-781)

1,159 citations

Journal ArticleDOI
TL;DR: A practical teaching tool is described that delineates and structures the skills which aid doctor‐patient communication, and provides detailed references to substantiate the research and theoretical basis of these individual skills.
Abstract: Effective communication between doctor and patient is a core clinical skill. It is increasingly recognized that it should and can be taught with the same rigour as other basic medical sciences. To validate this teaching, it is important to define the content of communication training programmes by stating clearly what is to be learnt. We therefore describe a practical teaching tool, the Calgary-Cambridge Referenced Observation Guides, that delineates and structures the skills which aid doctor-patient communication. We provide detailed references to substantiate the research and theoretical basis of these individual skills. The guides form the foundation of a sound communication curriculum and are offered as a starting point for programme directors, facilitators and learners at all levels. We describe how these guides can also be used on an everyday basis to help facilitators teach and students learn within the experiential methodology that has been shown to be central to communication training. The learner-centred and opportunistic approach used in communication teaching makes it difficult for learners to piece together their evolving understanding of communication. The guides give practical help in countering this problem by providing: an easily accessible aide-memoire; a recording instrument that makes feedback more systematic; and an overall conceptual framework within which to organize the numerous skills that are discovered one by one as the communication curriculum unfolds.

420 citations

Journal ArticleDOI
10 Nov 2008-BMJ
TL;DR: Think about a clinical teaching session that you supervised recently: how much feedback did you provide, and how useful do you think your feedback was?
Abstract: Think about a clinical teaching session that you supervised recently. How much feedback did you provide? How useful do you think your feedback was?

359 citations