scispace - formally typeset
Author

Jim Cox

Bio: Jim Cox is an academic researcher from the General Medical Council. The author has contributed to research on topics including Competence (human resources) and Peer review. The author has an h-index of 4, having co-authored 4 publications receiving 162 citations.

Papers
Journal ArticleDOI
TL;DR: Modernization of medical regulation has included the introduction of the Professional Performance Procedures by the UK General Medical Council in 1995, giving the Council the power to assess any registered practitioner whose performance may be seriously deficient and thus to call registration into question.
Abstract: Background Modernization of medical regulation has included the introduction of the Professional Performance Procedures by the UK General Medical Council in 1995. The Council now has the power to assess any registered practitioner whose performance may be seriously deficient, thus calling registration (licensure) into question. Problems arising from ill health or conduct are dealt with under separate programmes. Methods This paper describes the development of the assessment programmes within the overall policy framework determined by the Council. Peer review of performance in the workplace (Phase 1) is followed by tests of competence (Phase 2) to reflect the relationship between clinical competence and performance. The theoretical and research basis for the approach is presented, and the relationship between the qualitative methods in Phase 1 and the quantitative methods in Phase 2 is explored. Conclusions The approach is feasible, has been implemented and has stood legal challenge. The assessors judge and report all the evidence they collect and may not select from it. All their judgements are included and the voice of the lay assessor is preserved. Taken together, the output from both phases forms an important basis for remediation and training should it be required.

75 citations

Journal ArticleDOI
TL;DR: The General Medical Council procedures to assess the performance of doctors who may be seriously deficient include peer review of the doctor’s practice at the workplace and tests of competence and skills.
Abstract: The General Medical Council procedures to assess the performance of doctors who may be seriously deficient include peer review of the doctor's practice at the workplace and tests of competence and skills. Peer reviews are conducted by three trained assessors: two from the same speciality as the doctor being assessed and one lay assessor. The doctor completes a portfolio to describe his/her training, experience and the circumstances of practice, and to self-rate his/her competence and familiarity in dealing with the common problems of his/her own discipline. The assessment includes a review of the doctor's medical records; discussion of cases selected from these records; observation of consultations for clinicians, or of relevant activities in non-clinicians; a tour of the doctor's workplace; interviews with at least 12 third parties (five nominated by the doctor); and structured interviews with the doctor. The content and structure of the peer review are designed to assess the doctor against the standards defined in Good Medical Practice, as applied to the doctor's speciality. The assessment methods are based on validated instruments and gather 700-1000 judgements on each doctor. Early experience of the peer review visits has confirmed their feasibility and effectiveness.

47 citations

Journal ArticleDOI
TL;DR: The development of the tests of competence used as part of the General Medical Council’s assessment of potentially seriously deficient doctors is described by reference to tests of knowledge and clinical and practical skills created for general practice.
Abstract: Objective This paper describes the development of the tests of competence used as part of the General Medical Council's assessment of potentially seriously deficient doctors. It is illustrated by reference to tests of knowledge and clinical and practical skills created for general practice. Subjects and tests A notional sample of 30 volunteers in 'good standing' in the specialty (reference group), 27 practitioners referred to the procedures and four practitioners not referred but who were the focus of concern over their performance. Tests were constructed using available guidelines and a specially convened working group in the specialty. Methods Standards were set using Angoff, modified contrasting group and global judgement methods, as appropriate. Results The tests performed with high reliability, showed evidence of construct validity, intercorrelated at appropriate levels and, at the standards employed, demonstrated good separation of the reference and referred groups. Likelihood ratios for above- and below-standard performance based on competence were large for each test. Seven of the 27 doctors referred were shown not to be deficient in both phases of the performance assessment.
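The Angoff standard-setting and likelihood-ratio calculations mentioned in the abstract above can be sketched in a few lines of Python. This is a minimal illustration of the two techniques, not the GMC's actual procedure; all judge estimates and classification counts below are made up for the example.

```python
def angoff_cut_score(judge_estimates):
    """Angoff method: each judge estimates, per test item, the probability
    that a borderline (minimally competent) doctor answers it correctly.
    The cut score is the sum across items of the mean judge estimate."""
    n_judges = len(judge_estimates)
    n_items = len(judge_estimates[0])
    item_means = [
        sum(judge[i] for judge in judge_estimates) / n_judges
        for i in range(n_items)
    ]
    return sum(item_means)

def likelihood_ratio(tp, fn, fp, tn):
    """Positive likelihood ratio for a below-standard test result:
    sensitivity / (1 - specificity), where 'positive' means the test
    flags the doctor as below standard."""
    sensitivity = tp / (tp + fn)   # flagged among truly deficient
    specificity = tn / (tn + fp)   # cleared among truly competent
    return sensitivity / (1 - specificity)

# Three judges rate five items (illustrative probabilities).
judges = [
    [0.6, 0.7, 0.5, 0.8, 0.6],
    [0.5, 0.6, 0.6, 0.7, 0.7],
    [0.7, 0.8, 0.4, 0.9, 0.5],
]
cut = angoff_cut_score(judges)        # ~3.2 of 5 items for a borderline candidate
lr = likelihood_ratio(20, 5, 3, 27)   # a large LR, as the abstract reports
```

A large likelihood ratio means a below-standard test result shifts the odds strongly toward genuine deficiency, which is why the abstract highlights it as evidence of good group separation.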

36 citations

Journal ArticleDOI
TL;DR: A comprehensive training programme for assessors has been developed that simulates the context of a typical practice‐based assessment and has been tailored for 12 medical specialties, and debriefing of assessors following real assessments has been strongly positive.
Abstract: From July 1997, the General Medical Council (GMC) has had the power to investigate doctors whose performance is considered to be seriously deficient. Assessment procedures have been developed for all medical specialties to include peer review of performance in practice and tests of competence. Peer review is conducted by teams of at least two medical assessors and one lay assessor. A comprehensive training programme for assessors has been developed that simulates the context of a typical practice-based assessment and has been tailored for 12 medical specialties. The training includes the principles of assessment, familiarization with the assessment instruments and supervised practice in assessment methods used during the peer review visit. High fidelity is achieved through the use of actors who simulate third-party interviewees and trained doctors who role-play the assessee. A subgroup of assessors, selected to lead the assessment teams, undergoes training in handling group dynamics, report writing and defending the assessment report against legal challenge. Debriefing of assessors following real assessments has been strongly positive with regard to their preparedness and confidence in undertaking the assessment.

7 citations


Cited by
Journal ArticleDOI
TL;DR: Current views of the relationship between competence and performance are described and some of the implications of the distinctions between the two areas are delineated for the purpose of assessing doctors in practice.
Abstract: Objective This paper aims to describe current views of the relationship between competence and performance and to delineate some of the implications of the distinctions between the two areas for the purpose of assessing doctors in practice. Methods During a 2-day closed session, the authors, using their wide experiences in this domain, defined the problem and the context, discussed the content and set up a new model. This was developed further by e-mail correspondence over a 6-month period. Results Competency-based assessments were defined as measures of what doctors do in testing situations, while performance-based assessments were defined as measures of what doctors do in practice. The distinction between competency-based and performance-based methods leads to a three-stage model for assessing doctors in practice. The first component of the model proposed is a screening test that would identify doctors at risk. Practitioners who ‘pass’ the screen would move on to a continuous quality improvement process aimed at raising the general level of performance. Practitioners deemed to be at risk would undergo a more detailed assessment process focused on rigorous testing, with poor performers targeted for remediation or removal from practice. Conclusion We propose a new model, designated the Cambridge Model, which extends and refines Miller's pyramid. It inverts his pyramid, focuses exclusively on the top two tiers, and identifies performance as a product of competence, the influences of the individual (e.g. health, relationships), and the influences of the system (e.g. facilities, practice time). The model provides a basis for understanding and designing assessments of practice performance.

390 citations

Journal ArticleDOI
28 Sep 2002-BMJ
TL;DR: The origins and development of the competency approach are explored, its current role in medical training is evaluated, and its strengths and limitations are discussed.
Abstract: Competency based medical training: review. The competency approach has become prominent at most stages of undergraduate and postgraduate medical training in many countries. In the United Kingdom, for example, it forms part of the performance procedures of the General Medical Council (GMC),1 underpins objectively structured clinical examinations (OSCEs) and records of in-training assessment (RITA), and has been advocated for the selection of registrars in general practice and interviews. 2 3 It has become central to the professional lives of all doctors and is treated as if it were a panacea—but there is little consensus among trainees, trainers, and committees on what this approach entails. I aim to explore the origins and development of the competency approach, evaluate its current role in medical training, and discuss its strengths and limitations. #### Summary points The competency approach did not result directly from recent scandals of incompetent doctors. It originated from parallel developments in vocational training in many countries, such as the national qualifications framework in New Zealand, the national training board in Australia, the national skills standards initiative in the United States, and the national vocational qualifications (NVQs) in the United Kingdom.4 This movement was driven largely by the politically perceived need to make the national workforce more competitive in the …

309 citations

Journal ArticleDOI
TL;DR: Multisource feedback (MSF), or 360-degree employee evaluation, is a questionnaire-based assessment method in which ratees are evaluated by peers, patients, and coworkers on key performance behaviors, and is gaining acceptance as a quality improvement method in health systems.
Abstract: Multisource feedback (MSF), or 360-degree employee evaluation, is a questionnaire-based assessment method in which ratees are evaluated by peers, patients, and coworkers on key performance behaviors. Although widely used in industrial settings to assess performance, the method is gaining acceptance as a quality improvement method in health systems. This article describes MSF, identifies the key aspects of MSF program design, summarizes some of the salient empirical research in medicine, and discusses possible limitations for MSF as an assessment tool in health care. In industry and in health care, experience suggests that MSF is most likely to succeed and result in changes in performance when attention is paid to structural and psychometric aspects of program design and implementation. A carefully selected steering committee ensures that the behaviors examined are appropriate, the communication package is clear, and the threats posed to individuals are minimized. The instruments that are developed must be tested to ensure that they are reliable, achieve a generalizability coefficient (Ep²) of .70, have face and content validity, and examine variance in performance ratings to establish that ratings are attributable to how the physician performs and not to factors beyond the physician's control (e.g., gender, age, or setting). Research shows that reliable data can be generated with a reasonable number of respondents, and physicians will use the feedback to contemplate and initiate changes in practice. Performance may be affected by familiarity between rater and ratee and by sociodemographic and continuing medical education characteristics; however, little of the variance in performance is explained by factors outside the physician's control. MSF is not a replacement for audit when clinical outcomes need to be assessed. However, when interpersonal, communication, professionalism, or teamwork behaviors need to be assessed and guidance given, it is one of the better tools that may be adopted and implemented to provide feedback and guide performance.
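The generalizability coefficient (Ep²) mentioned in the abstract above can be estimated from a fully crossed person × rater design using standard variance-component arithmetic. The sketch below is a minimal pure-Python illustration with invented ratings, not an MSF instrument's actual data or scoring code.

```python
def g_coefficient(scores):
    """Estimate Ep^2 for a crossed person x rater design.
    scores[p][r] is rater r's rating of person p.
    Ep^2 = var_person / (var_person + var_residual / n_raters),
    with variance components taken from a two-way ANOVA without replication."""
    n_p, n_r = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_p * n_r)
    person_means = [sum(row) / n_r for row in scores]
    rater_means = [sum(scores[p][r] for p in range(n_p)) / n_p for r in range(n_r)]

    ss_person = n_r * sum((m - grand) ** 2 for m in person_means)
    ss_rater = n_p * sum((m - grand) ** 2 for m in rater_means)
    ss_total = sum((scores[p][r] - grand) ** 2
                   for p in range(n_p) for r in range(n_r))
    ss_resid = ss_total - ss_person - ss_rater

    ms_person = ss_person / (n_p - 1)
    ms_resid = ss_resid / ((n_p - 1) * (n_r - 1))
    var_person = max((ms_person - ms_resid) / n_r, 0.0)  # clamp at zero
    return var_person / (var_person + ms_resid / n_r)

# Four physicians each rated by the same three raters (illustrative numbers).
consistent = [[7, 7, 8], [3, 3, 4], [9, 9, 9], [5, 5, 6]]
g = g_coefficient(consistent)   # high: raters agree, physicians differ
```

When raters agree and physicians genuinely differ, Ep² approaches 1; noisy, inconsistent ratings drive it toward 0, which is why the .70 threshold in the abstract functions as a minimum-dependability bar.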

274 citations

Journal ArticleDOI
TL;DR: This chapter discusses how professional assessment can have a powerful educational impact by providing transparent performance criteria and returning structured formative feedback.
Abstract: Background Good professional regulation depends on high quality procedures for assessing professional performance. Professional assessment can also have a powerful educational impact by providing transparent performance criteria and returning structured formative feedback. Aim This paper sets out to define some of the fundamental principles of good assessment design. Conclusions It is essential to clarify the purpose of the assessment in question because this drives every aspect of its design. The intended focus for the assessment should be defined as specifically as possible. The scope of situations over which the result is intended to generalize should be established. Blueprinting may help the test designer to select a representative sample of practice across all the relevant aspects of performance and may also be used to inform the selection of appropriate assessment methods. An appropriately designed pilot study enables the test designer to evaluate feasibility, acceptability, validity (with respect to the intended focus) and reliability (with respect to the intended scope of generalization).

226 citations