Open Access · Journal Article · DOI

Assessing professional competence: from methods to programmes

Cees P. M. van der Vleuten
01 Mar 2005
Vol. 39, Iss. 3, pp. 309-317
TL;DR
In this paper, the authors use a utility model to illustrate that selecting an assessment method involves context-dependent compromises, and that assessment is not a measurement problem but an instructional design problem, comprising educational, implementation and resource aspects.
Abstract
INTRODUCTION: We use a utility model to illustrate that, firstly, selecting an assessment method involves context-dependent compromises and, secondly, that assessment is not a measurement problem but an instructional design problem, comprising educational, implementation and resource aspects. In the model, assessment characteristics are weighted differently depending on the purpose and context of the assessment.

EMPIRICAL AND THEORETICAL DEVELOPMENTS: Of the characteristics in the model, we focus on reliability, validity and educational impact and argue that they are not inherent qualities of any instrument. Reliability depends not on structuring or standardisation but on sampling. Key issues concerning validity are authenticity and integration of competencies. Assessment in medical education addresses complex competencies and thus requires quantitative and qualitative information from different sources as well as professional judgement. Adequate sampling across judges, instruments and contexts can ensure both validity and reliability. Despite recognition that assessment drives learning, this relationship has been little researched, possibly because of its strong context dependence.

ASSESSMENT AS INSTRUCTIONAL DESIGN: If assessment is to stimulate learning and requires adequate sampling, in authentic contexts, of the performance of complex competencies that cannot be broken down into simple parts, we need to shift from individual methods to an integral programme, intertwined with the education programme. Therefore, we need an instructional design perspective.

IMPLICATIONS FOR DEVELOPMENT AND RESEARCH: Programmatic instructional design hinges on a careful description and motivation of choices, whose effectiveness should be measured against the intended outcomes. We should not evaluate individual methods, but instead provide evidence of the utility of the assessment programme as a whole.
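The weighting idea can be made concrete with a minimal sketch, assuming the multiplicative utility function from van der Vleuten's earlier work (the abstract itself does not spell out the terms or weights, so the exponent weights here are purely illustrative):

U = R^{w_R} \times V^{w_V} \times E^{w_E} \times A^{w_A} \times C^{w_C}

where R is reliability, V validity, E educational impact, A acceptability and C cost efficiency, and the weights w reflect the purpose and context of the assessment. Because the form is multiplicative, a heavily weighted characteristic scored near zero pulls overall utility towards zero; a high-stakes licensing examination would weight reliability heavily, whereas a formative in-training assessment would weight educational impact more, which is the context-dependent compromise the abstract describes.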



Citations

Standards for educational and psychological testing

TL;DR: The Standards provide a framework indicating the effectiveness of high-quality instruments in those situations in which their use is supported by validation data.
Journal Article · DOI

Achieving Desired Results and Improved Outcomes: Integrating Planning and Assessment Throughout Learning Activities

TL;DR: A conceptual model for planning and assessing physicians' continuous learning is proposed, which the authors believe will help CME planners address issues of physician competence, physician performance and patient health status.
Journal Article · DOI

A model for programmatic assessment fit for purpose

TL;DR: A model for programmatic assessment in action is proposed, which simultaneously optimises assessment for learning and assessment for decision making about learner progress, and which enables assessment to move beyond the dominant psychometric discourse, with its focus on individual instruments, towards a systems approach to assessment design underpinned by empirical research.
Journal Article · DOI

Programmatic assessment: From assessment of learning to assessment for learning

TL;DR: In assessment, a considerable shift in thinking has occurred from assessment of learning to assessment for learning, which has important implications not only for the conceptual framework from which to approach assessment but also for the research agenda.
Journal Article · DOI

Tools for Direct Observation and Assessment of Clinical Skills of Medical Trainees: A Systematic Review

TL;DR: Although many tools are available for the direct observation of clinical skills, validity evidence and description of educational outcomes are scarce.
References

Standards for educational and psychological testing

TL;DR: The Standards provide a framework indicating the effectiveness of high-quality instruments in those situations in which their use is supported by validation data.
Journal Article · DOI

The Assessment of Clinical skills/competence/performance

G. E. Miller
01 Sep 1990
TL;DR: No abstract is available for this article.
Journal Article · DOI

Defining and Assessing Professional Competence

TL;DR: An inclusive definition of competence is generated: the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served.
Journal Article · DOI

The Interplay of Evidence and Consequences in the Validation of Performance Assessments

TL;DR: In this paper, the authors distinguish between task-driven and construct-driven performance assessment, emphasizing the need for specialized validity criteria tailored to performance assessment and the importance of domain coverage.
Journal Article · DOI

Validity: on the meaningful interpretation of assessment data

TL;DR: Five sources – content, response process, internal structure, relationship to other variables and consequences – are noted by the Standards for Educational and Psychological Testing as fruitful areas in which to seek validity evidence.