Author

Philip Tombleson

Bio: Philip Tombleson is an academic researcher from the General Medical Council. The author has contributed to research in the topics of Competence (human resources) and the Objective Structured Clinical Examination. The author has an h-index of 2 and has co-authored 2 publications receiving 66 citations.

Papers
Journal ArticleDOI
TL;DR: The development of the tests of competence used as part of the General Medical Council’s assessment of potentially seriously deficient doctors is described by reference to tests of knowledge and clinical and practical skills created for general practice.
Abstract:
Objective: This paper describes the development of the tests of competence used as part of the General Medical Council’s assessment of potentially seriously deficient doctors. It is illustrated by reference to tests of knowledge and clinical and practical skills created for general practice.
Subjects and tests: A national sample of 30 volunteers in ‘good standing’ in the specialty (reference group), 27 practitioners referred to the procedures and four practitioners not referred but who were the focus of concern over their performance. Tests were constructed using available guidelines and a specially convened working group in the specialty.
Methods: Standards were set using Angoff, modified contrasting group and global judgement methods, as appropriate.
Results: Tests performed highly reliably, showed evidence of construct validity, intercorrelated at appropriate levels and, at the standards employed, demonstrated good separation of the reference and referred groups. Likelihood ratios for above- and below-standard performance based on competence were large for each test. Seven of the 27 referred doctors were shown not to be deficient in both phases of the performance assessment.
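The "likelihood ratios" reported above can be made concrete with a small sketch. The counts below are hypothetical, not the study's data: the idea is that a below-standard test result is informative when it is much more common in the referred group than in the competent reference group.

```python
# Illustrative sketch (hypothetical counts, not the paper's data): computing
# likelihood ratios for below- and above-standard test performance from the
# pass/fail counts of a reference (competent) group and a referred group.

def likelihood_ratios(ref_below, ref_total, referred_below, referred_total):
    """Return (LR_below, LR_above) for a below/above-standard result.

    ref_*      -- reference group of doctors in good standing
    referred_* -- doctors referred over performance concerns
    """
    p_below_referred = referred_below / referred_total
    p_below_ref = ref_below / ref_total
    # A large LR_below means a below-standard result strongly favours the
    # referred group; an LR_above well under 1 works the other way.
    lr_below = p_below_referred / p_below_ref
    lr_above = (1 - p_below_referred) / (1 - p_below_ref)
    return lr_below, lr_above

# Hypothetical: 2/30 reference doctors and 20/27 referred doctors fall
# below the standard on one test.
lr_below, lr_above = likelihood_ratios(2, 30, 20, 27)
print(f"LR(below standard) = {lr_below:.1f}, LR(above standard) = {lr_above:.2f}")
```

With these invented counts a below-standard result is roughly eleven times as likely in the referred group, which is the kind of "large" ratio the abstract refers to.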

36 citations

Journal ArticleDOI
TL;DR: The steps taken to develop an appropriate list of ‘clinical problems’ used to define the content of the objective structured clinical examination (OSCE) component of the Professional and Linguistic Assessments Board (PLAB) examination are described.
Abstract:
Introduction: We describe the steps taken to develop an appropriate list of ‘clinical problems’ used to define the content of the objective structured clinical examination (OSCE) component of the Professional and Linguistic Assessments Board (PLAB) examination.
Method: A blueprint and list of 255 clinical problems was compiled by reviewing PLAB questions, published curricula of the UK Royal Colleges and other sources such as the General Medical Council’s own guidelines. This list was sent to a random sample of 251 successful PLAB candidates, who were asked to rate the clinical problems on a scale of ‘seen frequently/seldom/never’, and to 120 members of the accident and emergency (A&E) specialists’ association, who were asked to identify ‘important’ tasks. The list was further validated using activity data obtained for consecutive A&E attendances (934) and admissions (6130) at three hospitals.
Results: After two mailings, 131/251 (52%) former PLAB candidates and 89/120 (74%) A&E specialists replied. All of the 255 clinical problems were seen by some former candidates and were felt to be important by some A&E specialists. Of the 255 problems, 40 were rated neither as important nor as seen frequently/seldom by over 50% of respondents. The 255 clinical problems covered a mean of 94% of consecutive A&E attendances and 97·6% of reasons for hospital admission. The correlation between clinical problems that were frequently encountered and those felt to be important was rho = 0·38 (P < 0·01).
Conclusion: The clinical problems appear to be appropriate for defining the content of the PLAB OSCE. We suggest that our problem list is useful in that all the problems are seen by some senior house officers, are felt to be important by some A&E specialists and cover at least 94% of the conditions for which patients both attend and are admitted from casualty. The correlation between clinical task importance and the frequency with which tasks were seen was only moderate, partly reflecting the relative seriousness of some uncommon medical conditions, which should not be missed on clinical assessment. The content of the OSCE component of the PLAB examination is being reviewed in the light of the findings of this study. The limitations of the study are discussed.
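The reported rho = 0·38 is a Spearman rank correlation between how often a problem is seen and how important it is judged to be. The ratings below are invented for illustration; the sketch implements Spearman's rho directly (average ranks for ties) using only the standard library.

```python
# Minimal Spearman rank correlation, stdlib only. The rating vectors at the
# bottom are hypothetical, standing in for the study's per-problem
# "seen frequently" and "rated important" proportions.

def rank(values):
    """1-based average ranks, with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-problem proportions: "seen frequently" vs "important".
freq = [0.9, 0.7, 0.6, 0.4, 0.2, 0.1]
importance = [0.8, 0.5, 0.9, 0.7, 0.6, 0.3]
rho = spearman_rho(freq, importance)
```

A moderate rho, as in the study, reflects exactly the situation the authors describe: some rarely seen conditions still rank high on importance.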

34 citations


Cited by
Journal ArticleDOI
TL;DR: This paper focuses particularly on the latter with respect to ways of ensuring content validity and achieving acceptable levels of reliability in the OSCE format.
Abstract: The traditional clinical examination has been shown to have serious limitations in terms of its validity and reliability. The OSCE provides some answers to these limitations and has become very popular. Many variants on the original OSCE format now exist and much research has been done on various aspects of their use. The issues to be addressed relate to organizational matters and to the quality of the assessment. This paper focuses particularly on the latter, with respect to ways of ensuring content validity and achieving acceptable levels of reliability. A particular concern has been the demonstrable need for long examinations if high levels of reliability are to be achieved. Strategies for reducing the practical difficulties this raises are discussed. Standard setting methods for use with OSCEs are described.
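The link between examination length and reliability that this abstract highlights is usually quantified with the Spearman-Brown prophecy formula, a standard psychometric result. The numbers below are illustrative, not taken from the paper:

```python
# Spearman-Brown prophecy formula: predicted reliability when a test is
# lengthened n-fold, and the lengthening factor needed to hit a target.
# The 0.55 / 0.8 figures below are hypothetical, chosen only to show why
# OSCEs need many stations to reach high reliability.

def spearman_brown(reliability, n):
    """Predicted reliability of a test lengthened by a factor n."""
    return n * reliability / (1 + (n - 1) * reliability)

def lengthening_factor(reliability, target):
    """Factor by which a test must grow to reach the target reliability."""
    return target * (1 - reliability) / (reliability * (1 - target))

# Hypothetical: a 10-station OSCE with reliability 0.55.
factor = lengthening_factor(0.55, 0.8)
stations = 10 * factor
print(f"Need about {stations:.0f} stations to reach reliability 0.8")
```

With these invented figures the examination must roughly triple in length, which is the practical difficulty the paper's strategies aim to reduce.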

435 citations

Journal ArticleDOI
TL;DR: Current views of the relationship between competence and performance are described and some of the implications of the distinctions between the two areas are delineated for the purpose of assessing doctors in practice.
Abstract:
Objective: This paper aims to describe current views of the relationship between competence and performance and to delineate some of the implications of the distinctions between the two areas for the purpose of assessing doctors in practice.
Methods: During a 2-day closed session the authors, drawing on their wide experience in this domain, defined the problem and the context, discussed the content and set up a new model. This was developed further by e-mail correspondence over a 6-month period.
Results: Competency-based assessments were defined as measures of what doctors do in testing situations, while performance-based assessments were defined as measures of what doctors do in practice. The distinction between competency-based and performance-based methods leads to a three-stage model for assessing doctors in practice. The first component of the proposed model is a screening test that would identify doctors at risk. Practitioners who ‘pass’ the screen would move on to a continuous quality improvement process aimed at raising the general level of performance. Practitioners deemed to be at risk would undergo a more detailed assessment process focused on rigorous testing, with poor performers targeted for remediation or removal from practice.
Conclusion: We propose a new model, designated the Cambridge Model, which extends and refines Miller's pyramid. It inverts his pyramid, focuses exclusively on the top two tiers, and identifies performance as a product of competence, the influences of the individual (e.g. health, relationships) and the influences of the system (e.g. facilities, practice time). The model provides a basis for understanding and designing assessments of practice performance.

390 citations

Journal ArticleDOI
28 Sep 2002-BMJ
TL;DR: The origins and development of the competency approach are explored, its current role in medical training is evaluated, and its strengths and limitations are discussed.
Abstract: Competency based medical training: review. The competency approach has become prominent at most stages of undergraduate and postgraduate medical training in many countries. In the United Kingdom, for example, it forms part of the performance procedures of the General Medical Council (GMC),1 underpins objective structured clinical examinations (OSCEs) and records of in-training assessment (RITA), and has been advocated for the selection of registrars in general practice and for interviews.2 3 It has become central to the professional lives of all doctors and is treated as if it were a panacea, yet there is little consensus among trainees, trainers, and committees on what this approach entails. I aim to explore the origins and development of the competency approach, evaluate its current role in medical training, and discuss its strengths and limitations. Summary points: The competency approach did not result directly from recent scandals of incompetent doctors. It originated from parallel developments in vocational training in many countries, such as the national qualifications framework in New Zealand, the national training board in Australia, the national skills standards initiative in the United States, and the national vocational qualifications (NVQs) in the United Kingdom.4 This movement was driven largely by the perceived political need to make the national workforce more competitive in the …

309 citations

Journal ArticleDOI
TL;DR: The theoretical aspects of the OSCE are addressed, exploring its historical development, its place within the range of assessment tools and its core applications, and more practical information on the process of implementing an OSCE is offered.
Abstract: The Objective Structured Clinical Examination (OSCE) was first described by Harden in 1975 as an alternative to the existing methods of assessing clinical performance (Harden et al. 1975). The OSCE was designed to improve the validity and reliability of assessment of performance, which was previously assessed using the long case and short case examinations. Since then the use of the OSCE has become widespread in both undergraduate and postgraduate clinical education. We recognise that the introduction of the OSCE into an existing assessment programme is a challenging process requiring a considerable amount of theoretical and practical knowledge. The two parts of this Guide are designed to assist all those who intend to implement the OSCE in their assessment systems. Part I addresses the theoretical aspects of the OSCE, exploring its historical development, its place within the range of assessment tools and its core applications. Part II offers more practical information on the process of implementing an OSCE, including guidance on developing OSCE stations, choosing scoring rubrics, training examiners and standardised patients, and managing quality assurance processes. Together we hope these two parts will act as a useful resource both for those choosing to implement the OSCE for the first time and for those wishing to quality-assure their existing OSCE programme.

259 citations