Author

Brian Jolly

Bio: Brian Jolly is an academic researcher from the University of Newcastle. He has contributed to research in topics including competence (human resources) and the objective structured clinical examination. He has an h-index of 42 and has co-authored 121 publications that have received 5,441 citations. His previous affiliations include Newcastle University and the Queen Elizabeth II Jubilee Hospital.


Papers
Journal Article
TL;DR: This large‐scale, interdisciplinary review of literature addressing supervision is the first from a medical education perspective to focus on clinical supervision in postgraduate and undergraduate medical education.
Abstract: Context: Clinical supervision has a vital role in postgraduate and, to some extent, undergraduate medical education. However, it is probably the least investigated, discussed and developed aspect of clinical education. This large-scale, interdisciplinary review of literature addressing supervision is the first from a medical education perspective. Purpose: To review the literature on effective supervision in practice settings in order to identify what is known about effective supervision. Content: The empirical basis of the literature is discussed and the literature reviewed to identify understandings and definitions of supervision and its purpose; theoretical models of supervision; availability, structure and content of supervision; effective supervision; skills and qualities of effective supervisors; and supervisor training and its effectiveness. Conclusions: The evidence only partially answers our original questions and suggests others. The supervision relationship is probably the single most important factor for the effectiveness of supervision, more important than the supervisory methods used. Feedback is essential and must be clear. It is important that the trainee has some control over and input into the supervisory process. Finding sufficient time for supervision can be a problem. Trainee behaviours and attitudes towards supervision require more investigation; some behaviours are detrimental both to patient care and learning. Current supervisory practice in medicine has very little empirical or theoretical basis. This review demonstrates the need for more structured and methodologically sound programmes of research into supervision in practice settings so that detailed models of effective supervision can be developed and thereby inform practice.

746 citations

Journal Article
TL;DR: This guide reviews what is known about educational and clinical supervision practice through a literature review and a questionnaire survey and identifies the need for a definition and for explicit guidelines on supervision.
Abstract: Background: This guide reviews what is known about educational and clinical supervision practice through a literature review and a questionnaire survey. It identifies the need for a definition and for explicit guidelines on supervision. There is strong evidence that, whilst supervision is considered to be both important and effective, practice is highly variable. In some cases, there is inadequate coverage and frequency of supervision activities. There is particular concern about lack of supervision for emergency and ‘out of hours’ work, failure to formally address under-performance, lack of commitment to supervision and finding sufficient time for supervision. There is a need for an effective system to address both poor performance and inadequate supervision. Supervision is defined in this guide as: ‘The provision of guidance and feedback on matters of personal, professional and educational development in the context of a trainee’s experience of providing safe and appropriate patient care.’ A framework for effective supervision is provided: (1) Effective supervision should be offered in context; supervisors must be aware of local postgraduate training bodies’ and institutions’ requirements. (2) Direct supervision, with trainee and supervisor working together and observing each other, positively affects patient outcome and trainee development. (3) Constructive feedback is essential and should be frequent. (4) Supervision should be structured and there should be regular timetabled meetings. The content of supervision meetings should be agreed and learning objectives determined at the beginning of the supervisory relationship. Supervision contracts can be useful tools and should include detail regarding frequency, duration and content of supervision; appraisal and assessment; learning objectives and any specific requirements. (5) Supervision should include clinical management; teaching and research; management and administration; pastoral care; interpersonal skills; personal development; and reflection. (6) The quality of the supervisory relationship strongly affects the effectiveness of supervision. Specific aspects include continuity over time in the supervisory relationship, the supervisee’s control over the product of supervision (there is some suggestion that supervision is only effective when this is the case) and some reflection by both participants. The relationship is partly influenced by the supervisor’s commitment to teaching as well as by the attitudes and commitment of both supervisor and trainee. (7) Training for supervisors needs to include some of the following: understanding teaching; assessment; counselling skills; appraisal; feedback; careers advice; and interpersonal skills. Supervisors (and trainees) need to understand that: (1) helpful supervisory behaviours include giving direct guidance on clinical work, linking theory and practice, engaging in joint problem solving, offering feedback and reassurance, and providing role models; (2) ineffective supervisory behaviours include rigidity; low empathy; failure to offer support; failure to follow supervisees’ concerns; not teaching; being indirect and intolerant; and emphasizing evaluation and negative aspects; (3) in addition to supervisory skills, effective supervisors need to have good interpersonal skills and good teaching skills, and be clinically competent and knowledgeable.

437 citations

Journal Article
TL;DR: Current views of the relationship between competence and performance are described and some of the implications of the distinctions between the two areas are delineated for the purpose of assessing doctors in practice.
Abstract: Objective: This paper aims to describe current views of the relationship between competence and performance and to delineate some of the implications of the distinctions between the two areas for the purpose of assessing doctors in practice. Methods: During a 2-day closed session, the authors, using their wide experiences in this domain, defined the problem and the context, discussed the content and set up a new model. This was developed further by e-mail correspondence over a 6-month period. Results: Competency-based assessments were defined as measures of what doctors do in testing situations, while performance-based assessments were defined as measures of what doctors do in practice. The distinction between competency-based and performance-based methods leads to a three-stage model for assessing doctors in practice. The first component of the model proposed is a screening test that would identify doctors at risk. Practitioners who ‘pass’ the screen would move on to a continuous quality improvement process aimed at raising the general level of performance. Practitioners deemed to be at risk would undergo a more detailed assessment process focused on rigorous testing, with poor performers targeted for remediation or removal from practice. Conclusion: We propose a new model, designated the Cambridge Model, which extends and refines Miller's pyramid. It inverts his pyramid, focuses exclusively on the top two tiers, and identifies performance as a product of competence, the influences of the individual (e.g. health, relationships), and the influences of the system (e.g. facilities, practice time). The model provides a basis for understanding and designing assessments of practice performance.
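Purely as a schematic reading aid (the paper does not state its model as an equation, and the symbols below are introduced here rather than taken from it), the Cambridge Model's central claim can be sketched in LaTeX as

$$P = f(C,\ I,\ S)$$

where $P$ is performance in actual practice, $C$ is competence (what the doctor can do in a testing situation), $I$ stands for individual influences such as health and relationships, and $S$ for system influences such as facilities and practice time.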

390 citations

Journal Article
TL;DR: This paper discusses how professional assessment can have a powerful educational impact by providing transparent performance criteria and returning structured formative feedback.
Abstract: Background: Good professional regulation depends on high quality procedures for assessing professional performance. Professional assessment can also have a powerful educational impact by providing transparent performance criteria and returning structured formative feedback. Aim: This paper sets out to define some of the fundamental principles of good assessment design. Conclusions: It is essential to clarify the purpose of the assessment in question because this drives every aspect of its design. The intended focus for the assessment should be defined as specifically as possible. The scope of situations over which the result is intended to generalize should be established. Blueprinting may help the test designer to select a representative sample of practice across all the relevant aspects of performance and may also be used to inform the selection of appropriate assessment methods. An appropriately designed pilot study enables the test designer to evaluate feasibility, acceptability, validity (with respect to the intended focus) and reliability (with respect to the intended scope of generalization).

226 citations

Journal Article
TL;DR: Workplace-based assessment (WBA) is complex and has relied on a number of recently developed methods and instruments, some of which involve checklists while others use judgements made on rating scales, as discussed in this paper.
Abstract: Medical Education 2012; 46: 28–37. Context: Historically, assessments have often measured the measurable rather than the important. Over the last 30 years, however, we have witnessed a gradual shift of focus in medical education. We now attempt to teach and assess what matters most. In addition, the component parts of a competence must be marshalled together and integrated to deal with real workplace problems. Workplace-based assessment (WBA) is complex, and has relied on a number of recently developed methods and instruments, of which some involve checklists and others use judgements made on rating scales. Given that judgements are subjective, how can we optimise their validity and reliability? Methods: This paper gleans psychometric data from a range of evaluations in order to highlight features of judgement-based assessments that are associated with better validity and reliability. It offers some issues for discussion and research around WBA. It refers to literature in a selective way. It does not purport to represent a systematic review, but it does attempt to offer some serious analyses of why some observations occur in studies of WBA and what we need to do about them. Results and Discussion: Four general principles emerge: the response scale should be aligned to the reality map of the judges; judgements rather than objective observations should be sought; the assessment should focus on competencies that are central to the activity observed; and the assessors who are best placed to judge performance should be asked to participate.

185 citations


Cited by
Journal Article
06 Sep 2006, JAMA
TL;DR: While suboptimal in quality, the preponderance of evidence suggests that physicians have a limited ability to accurately self-assess, and processes currently used to undertake professional development and evaluate competence may need to focus more on external assessment.
Abstract: Context: Core physician activities of lifelong learning, continuing medical education credit, relicensure, specialty recertification, and clinical competence are linked to the abilities of physicians to assess their own learning needs and choose educational activities that meet these needs. Objective: To determine how accurately physicians self-assess compared with external observations of their competence. Data Sources: The electronic databases MEDLINE (1966-July 2006), EMBASE (1980-July 2006), CINAHL (1982-July 2006), PsycINFO (1967-July 2006), the Research and Development Resource Base in CME (1978-July 2006), and proprietary search engines were searched using terms related to self-directed learning, self-assessment, and self-reflection. Study Selection: Studies were included if they compared physicians' self-rated assessments with external observations, used quantifiable and replicable measures, included a study population of at least 50% practicing physicians, residents, or similar health professionals, and were conducted in the United Kingdom, Canada, United States, Australia, or New Zealand. Studies were excluded if they were comparisons of self-reports, studies of medical students, assessed physician beliefs about patient status, described the development of self-assessment measures, or were self-assessment programs of specialty societies. Studies conducted in the context of an educational or quality improvement intervention were included only if comparative data were obtained before the intervention. Data Extraction: Study population, content area and self-assessment domain of the study, methods used to measure the self-assessment of study participants and those used to measure their competence or performance, existence and use of statistical tests, study outcomes, and explanatory comparative data were extracted. Data Synthesis: The search yielded 725 articles, of which 17 met all inclusion criteria. The studies included a wide range of domains, comparisons, measures, and methodological rigor. Of the 20 comparisons between self- and external assessment, 13 demonstrated little, no, or an inverse relationship and 7 demonstrated positive associations. A number of studies found the worst accuracy in self-assessment among physicians who were the least skilled and those who were the most confident. These results are consistent with those found in other professions. Conclusions: While suboptimal in quality, the preponderance of evidence suggests that physicians have a limited ability to accurately self-assess. The processes currently used to undertake professional development and evaluate competence may need to focus more on external assessment.

2,141 citations

Journal Article
TL;DR: Five sources – content, response process, internal structure, relationship to other variables and consequences – are noted by the Standards for Educational and Psychological Testing as fruitful areas to seek validity evidence.
Abstract: […] support or fail to support the proposed score interpretations, at a given point in time. Data and logic are assembled into arguments – pro and con – for some specific interpretation of assessment data. Examples of types of validity evidence, data and information from each source are discussed in the context of a high-stakes written and performance examination in medical education. Conclusion: All assessments require evidence of the reasonableness of the proposed interpretation, as test data in education have little or no intrinsic meaning. The constructs purported to be measured by our assessments are important to students, faculty, administrators, patients and society and require solid scientific evidence of their meaning.

1,193 citations

Journal Article
TL;DR: In this paper, the authors use a utility model to illustrate that selecting an assessment method involves context-dependent compromises, and that assessment is not a measurement problem but an instructional design problem, comprising educational, implementation and resource aspects.
Abstract: Introduction: We use a utility model to illustrate that, firstly, selecting an assessment method involves context-dependent compromises, and secondly, that assessment is not a measurement problem but an instructional design problem, comprising educational, implementation and resource aspects. In the model, assessment characteristics are differently weighted depending on the purpose and context of the assessment. Empirical and Theoretical Developments: Of the characteristics in the model, we focus on reliability, validity and educational impact and argue that they are not inherent qualities of any instrument. Reliability depends not on structuring or standardisation but on sampling. Key issues concerning validity are authenticity and integration of competencies. Assessment in medical education addresses complex competencies and thus requires quantitative and qualitative information from different sources as well as professional judgement. Adequate sampling across judges, instruments and contexts can ensure both validity and reliability. Despite recognition that assessment drives learning, this relationship has been little researched, possibly because of its strong context dependence. Assessment as Instructional Design: When assessment should stimulate learning and requires adequate sampling, in authentic contexts, of the performance of complex competencies that cannot be broken down into simple parts, we need to make a shift from individual methods to an integral programme, intertwined with the education programme. Therefore, we need an instructional design perspective. Implications for Development and Research: Programmatic instructional design hinges on a careful description and motivation of choices, whose effectiveness should be measured against the intended outcomes. We should not evaluate individual methods, but provide evidence of the utility of the assessment programme as a whole.
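The utility model invoked here is usually traced to van der Vleuten's earlier work, in which the usefulness of an assessment is summarised as a weighted combination of its key characteristics. As a reading aid only (the abstract prints no formula, and exact renderings vary between papers), the idea can be sketched in LaTeX as

$$U = R^{w_R} \times V^{w_V} \times E^{w_E} \times A^{w_A} \times C^{w_C}$$

where $U$ is the utility of an assessment method, $R$ its reliability, $V$ its validity, $E$ its educational impact, $A$ its acceptability and $C$ its cost(-effectiveness), with the weights $w$ expressing how heavily each characteristic counts for a given purpose and context; this is the abstract's point that assessment characteristics "are differently weighted depending on the purpose and context of the assessment".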

958 citations