Author

Joanne Turner

Bio: Joanne Turner is an academic researcher. The author has contributed to research in the topics of Competence (human resources) and Peer review. The author has an h-index of 2 and has co-authored 2 publications receiving 122 citations.

Papers
Journal ArticleDOI
TL;DR: Modernization of medical regulation has included the introduction of the Professional Performance Procedures by the UK General Medical Council in 1995; the Council now has the power to assess any registered practitioner whose performance may be seriously deficient, thus calling registration into question.
Abstract: Background: Modernization of medical regulation has included the introduction of the Professional Performance Procedures by the UK General Medical Council in 1995. The Council now has the power to assess any registered practitioner whose performance may be seriously deficient, thus calling registration (licensure) into question. Problems arising from ill health or conduct are dealt with under separate programmes. Methods: This paper describes the development of the assessment programmes within the overall policy framework determined by the Council. Peer review of performance in the workplace (Phase 1) is followed by tests of competence (Phase 2) to reflect the relationship between clinical competence and performance. The theoretical and research basis for the approach is presented, and the relationship between the qualitative methods in Phase 1 and the quantitative methods in Phase 2 is explored. Conclusions: The approach is feasible, has been implemented and has withstood legal challenge. The assessors judge and report all the evidence they collect and may not select from it. All their judgements are included and the voice of the lay assessor is preserved. Taken together, the output from both phases forms an important basis for remediation and training should it be required.

75 citations

Journal ArticleDOI
TL;DR: The General Medical Council procedures to assess the performance of doctors who may be seriously deficient include peer review of the doctor’s practice at the workplace and tests of competence and skills.
Abstract: The General Medical Council procedures to assess the performance of doctors who may be seriously deficient include peer review of the doctor's practice at the workplace and tests of competence and skills. Peer reviews are conducted by three trained assessors: two from the same speciality as the doctor being assessed and one lay assessor. The doctor completes a portfolio describing his/her training, experience and circumstances of practice, and self-rates his/her competence and familiarity in dealing with the common problems of his/her own discipline. The assessment includes a review of the doctor's medical records; discussion of cases selected from these records; observation of consultations for clinicians, or of relevant activities for non-clinicians; a tour of the doctor's workplace; interviews with at least 12 third parties (five nominated by the doctor); and structured interviews with the doctor. The content and structure of the peer review are designed to assess the doctor against the standards defined in Good Medical Practice, as applied to the doctor's speciality. The assessment methods are based on validated instruments and gather 700-1000 judgements on each doctor. Early experience of the peer review visits has confirmed their feasibility and effectiveness.

47 citations


Cited by
Journal ArticleDOI
TL;DR: Current views of the relationship between competence and performance are described and some of the implications of the distinctions between the two areas are delineated for the purpose of assessing doctors in practice.
Abstract: Objective: This paper aims to describe current views of the relationship between competence and performance and to delineate some of the implications of the distinctions between the two areas for the purpose of assessing doctors in practice. Methods: During a 2-day closed session, the authors, drawing on their wide experience in this domain, defined the problem and the context, discussed the content and set up a new model. This was developed further by e-mail correspondence over a 6-month period. Results: Competency-based assessments were defined as measures of what doctors do in testing situations, while performance-based assessments were defined as measures of what doctors do in practice. The distinction between competency-based and performance-based methods leads to a three-stage model for assessing doctors in practice. The first component of the proposed model is a screening test that would identify doctors at risk. Practitioners who ‘pass’ the screen would move on to a continuous quality improvement process aimed at raising the general level of performance. Practitioners deemed to be at risk would undergo a more detailed assessment process focused on rigorous testing, with poor performers targeted for remediation or removal from practice. Conclusion: We propose a new model, designated the Cambridge Model, which extends and refines Miller's pyramid. It inverts his pyramid, focuses exclusively on the top two tiers, and identifies performance as a product of competence, the influences of the individual (e.g. health, relationships) and the influences of the system (e.g. facilities, practice time). The model provides a basis for understanding and designing assessments of practice performance.

390 citations

Journal ArticleDOI
TL;DR: Multisource feedback (MSF), or 360-degree employee evaluation, is a questionnaire-based assessment method in which ratees are evaluated by peers, patients, and coworkers on key performance behaviors, and is gaining acceptance as a quality improvement method in health systems.
Abstract: Multisource feedback (MSF), or 360-degree employee evaluation, is a questionnaire-based assessment method in which ratees are evaluated by peers, patients, and coworkers on key performance behaviors. Although widely used in industrial settings to assess performance, the method is gaining acceptance as a quality improvement method in health systems. This article describes MSF, identifies the key aspects of MSF program design, summarizes some of the salient empirical research in medicine, and discusses possible limitations of MSF as an assessment tool in health care. In industry and in health care, experience suggests that MSF is most likely to succeed and result in changes in performance when attention is paid to structural and psychometric aspects of program design and implementation. A carefully selected steering committee ensures that the behaviors examined are appropriate, the communication package is clear, and the threats posed to individuals are minimized. The instruments that are developed must be tested to ensure that they are reliable, achieve a generalizability coefficient of Eρ² = 0.70, have face and content validity, and examine variance in performance ratings to understand whether ratings are attributable to how the physician performs and not to factors beyond the physician's control (e.g., gender, age, or setting). Research shows that reliable data can be generated with a reasonable number of respondents, and physicians will use the feedback to contemplate and initiate changes in practice. Performance may be affected by familiarity between rater and ratee and by sociodemographic and continuing medical education characteristics; however, little of the variance in performance is explained by factors outside the physician's control. MSF is not a replacement for audit when clinical outcomes need to be assessed. However, when interpersonal, communication, professionalism, or teamwork behaviors need to be assessed and guidance given, it is one of the better tools that may be adopted and implemented to provide feedback and guide performance.
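For context, the generalizability coefficient referenced above comes from generalizability theory; for a simple single-facet design in which each physician (p) is scored by n_r raters (r), it is conventionally defined as

E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pr,e}/n_r}

where \sigma^2_p is the variance attributable to true differences among the physicians being rated and \sigma^2_{pr,e} is the physician-by-rater interaction confounded with residual error. A threshold of 0.70 thus requires that at least 70% of observed score variance reflect real differences among physicians; averaging over more raters shrinks the error term and raises the coefficient. This is the standard single-facet formulation; the MSF studies summarized here may use a more elaborate multi-facet design.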

274 citations

Journal ArticleDOI
TL;DR: This paper discusses how professional assessment can have a powerful educational impact by providing transparent performance criteria and returning structured formative feedback.
Abstract: Background: Good professional regulation depends on high quality procedures for assessing professional performance. Professional assessment can also have a powerful educational impact by providing transparent performance criteria and returning structured formative feedback. Aim: This paper sets out to define some of the fundamental principles of good assessment design. Conclusions: It is essential to clarify the purpose of the assessment in question because this drives every aspect of its design. The intended focus for the assessment should be defined as specifically as possible. The scope of situations over which the result is intended to generalize should be established. Blueprinting may help the test designer to select a representative sample of practice across all the relevant aspects of performance and may also be used to inform the selection of appropriate assessment methods. An appropriately designed pilot study enables the test designer to evaluate feasibility, acceptability, validity (with respect to the intended focus) and reliability (with respect to the intended scope of generalization).

226 citations

Journal ArticleDOI
TL;DR: This instalment in the series on professional assessment summarises how peers are used in the evaluation process and whether their judgements are reliable and valid.
Abstract: Objective: This instalment in the series on professional assessment summarises how peers are used in the evaluation process and whether their judgements are reliable and valid. Method: The nature of the judgements peers can make, the aspects of competence they can assess and the factors limiting the quality of the results are described with reference to the literature. The steps in implementation are also provided. Results: Peers are asked to make judgements about structured tasks or to provide their global impressions of colleagues. Judgements are gathered on whether certain actions were performed, the quality of those actions and/or their suitability for a particular purpose. Peers are used to assess virtually all aspects of professional competence, including technical and non-technical aspects of proficiency. Factors influencing the quality of those assessments are reliability, relationships, stakes and equivalence. Conclusion: Given the broad range of ways peer evaluators can be used and the sizeable number of competencies they can be asked to judge, generalisations are difficult to derive, and this form of assessment can be good or bad depending on how it is carried out.

210 citations