Journal ArticleDOI

Measuring Procedural Knowledge More Simply with a Single-Response Situational Judgment Test

05 Apr 2009 - Journal of Business and Psychology (Springer US) - Vol. 24, Iss. 3, pp. 281-288
TL;DR: In this article, the authors describe the development of a situational judgment test (SJT) based on single-response options developed directly from critical incidents and report a study that tested the SJT's concurrent validity against ratings of job performance.
Abstract: Purpose: This paper describes the development of a situational judgment test (SJT) based on single-response options developed directly from critical incidents and reports a study that tested the SJT’s concurrent validity against ratings of job performance.
Citations
Journal ArticleDOI
TL;DR: Research on the validity of knowledge tests, low-fidelity simulations, and high-fidelity simulations in advanced-level high-stakes settings is integrated, and a model and hypotheses of how these three predictors work in combination to predict job performance are developed.
Abstract: In high-stakes selection among candidates with considerable domain-specific knowledge and experience, investigations of whether high-fidelity simulations (assessment centers; ACs) have incremental validity over low-fidelity simulations (situational judgment tests; SJTs) are lacking. Therefore, this article integrates research on the validity of knowledge tests, low-fidelity simulations, and high-fidelity simulations in advanced-level high-stakes settings. A model and hypotheses of how these 3 predictors work in combination to predict job performance were developed. In a sample of 196 applicants, all 3 predictors were significantly related to job performance. Both the SJT and the AC had incremental validity over the knowledge test. Moreover, the AC had incremental validity over the SJT. Model tests showed that the SJT fully mediated the effects of declarative knowledge on job performance, whereas the AC partially mediated the effects of the SJT.

156 citations

Journal ArticleDOI
TL;DR: Evidence consistently shows that SJTs used in medical selection have good reliability, and predict performance across a range of medical professions, including performance in general practice, in early years (foundation training as a junior doctor) and for medical school admissions.
Abstract: Why use SJTs? Traditionally, selection into medical education and the professions has focused primarily upon academic ability alone. This approach has been questioned more recently: although academic attainment predicts performance early in training, research shows it has less predictive power for demonstrating competence in postgraduate clinical practice. Such evidence, coupled with an increasing focus on individuals in healthcare roles displaying the core values of compassionate care, benevolence and respect, illustrates that individuals should be selected on attributes other than academic ability alone. Moreover, there are mounting calls to widen access to medicine and to ensure that selection methods do not unfairly disadvantage individuals from specific groups (e.g. regarding ethnicity or socio-economic status), so that the future workforce adequately represents society as a whole. These drivers necessitate a method of assessment that allows individuals to be selected on important non-academic attributes that are desirable in healthcare professionals, in a fair, reliable and valid way.

What are SJTs? Situational judgement tests (SJTs) assess individuals' reactions to a number of hypothetical role-relevant scenarios, which reflect situations candidates are likely to encounter in the target role. These scenarios are based on a detailed analysis of the role and should be developed in collaboration with subject matter experts, in order to accurately assess the key attributes that are associated with competent performance. From a theoretical perspective, SJTs are believed to measure prosocial Implicit Trait Policies (ITPs), which are shaped by socialisation processes that teach the utility of expressing certain traits in different settings, such as agreeable expressions (e.g. helping others in need) or disagreeable actions (e.g. advancing one's own interests at others' expense).

Are SJTs reliable, valid and fair? Several studies, including good-quality meta-analytic and longitudinal research, consistently show that SJTs used in many different occupational groups are reliable and valid. Although there is over 40 years of research evidence available on SJTs, it is only within the past 10 years that SJTs have been used for recruitment into medicine. Specifically, evidence consistently shows that SJTs used in medical selection have good reliability and predict performance across a range of medical professions, including performance in general practice, in the early years (foundation training as a junior doctor) and for medical school admissions. In addition, SJTs have been found to have significant added value (incremental validity) over and above other selection methods such as knowledge tests, measures of cognitive ability, personality tests and application forms. Regarding differential attainment, SJTs have generally been found to have lower adverse impact than other selection methods, such as cognitive ability tests. SJTs have the benefit of being appropriate both for use in selection where candidates are novices (i.e. have no prior role experience or knowledge, such as in medical school admissions) and in settings where candidates have substantial job knowledge and specific experience (as in postgraduate recruitment for more senior roles). An SJT specification (e.g. scenario content, response instructions and format) may differ depending on the level of job knowledge required.
Research consistently shows that SJTs are positively received by candidates compared to other selection tests such as cognitive ability and personality tests. In practice, SJTs are difficult to design effectively, and significant expertise is required to build a reliable and valid SJT. Once designed, however, SJTs are cost-efficient to administer to large numbers of candidates compared to other tests of non-academic attributes (e.g. personal statements, structured interviews), as they are standardised and can be computer-delivered and machine-marked.

141 citations


Cites background from "Measuring Procedural Knowledge More..."

  • ...Some researchers have developed a single-response SJT format, whereby only one response option is given as part of the scenario (Motowidlo et al. 2009; Martin & Motowidlo 2010)....


  • …indicates promising validity of the single-response SJT format as a measure of procedural knowledge and as a predictor of job performance (Motowidlo et al. 2009; Martin & Motowidlo 2010; Crook et al. 2011), however further research is required to ascertain the reliability and long-term…


Journal ArticleDOI
TL;DR: Situational judgment tests (SJTs) are typically conceptualized as contextualized selection procedures that capture candidate responses to a set of relevant job situations as a basis for prediction as discussed by the authors.
Abstract: Situational judgment tests (SJTs) are typically conceptualized as contextualized selection procedures that capture candidate responses to a set of relevant job situations as a basis for prediction. SJTs share their sample-based and contextualized approach with work samples and assessment center exercises, although they differ from these other simulations by presenting the situations in a low-fidelity (e.g., written) format. In addition, SJTs do not require candidates to respond through actual behavior because they capture candidates’ situational judgment via a multiple-choice response format. Accordingly, SJTs have also been labeled low-fidelity simulations. This SJT paradigm has been very successful: In the last 2 decades, scientific interest in SJTs has grown, and they have made rapid inroads in practice as attractive, versatile, and valid selection procedures. Despite their popularity and the voluminous research on their criterion-related validity, however, there has been little attention to developing a theory of why SJTs work. Similarly, in SJT development, often little emphasis is placed on measuring clear and explicit constructs. Therefore, Landy (2007) referred to SJTs as “psychometric alchemy” (p. 418).

87 citations

Journal ArticleDOI
TL;DR: Situational judgment tests (SJTs) have become popular selection methods, with 59 empirical studies having been published since 1990, as discussed by the authors. The review is organized around a single question: What are the current practices in SJT research? Using this question as a foundation, the content analysis focuses on three significant theoretical and practical themes: (a) SJT development, scoring methods, and uses; (b) types of reliability estimates reported for SJTs; and (c) attributes that enhance or reduce internal consistency reliability.
Abstract: Situational judgment tests (SJTs) have become popular selection methods, with 59 empirical studies having been published since 1990. In contrast to prior narrative reviews or meta-analyses, this study (a) develops a comprehensive structure of SJT features, or “attributes,” (b) uses this structure to quantitatively and qualitatively summarize existing research in a content analysis, and (c) uses the content analysis to generate directions for future research. The review is organized around a single question: What are the current practices in SJT research? Using this question as a foundation, we focus the content analysis on three significant theoretical and practical themes: (a) SJT development, scoring methods, and uses; (b) types of reliability estimates reported for SJTs; and (c) attributes that enhance or reduce internal consistency reliability.

79 citations

Journal ArticleDOI
TL;DR: Verbal protocol analyses confirmed that high scorers on SJTs without situation descriptions relied upon general rules about the effectiveness of the responses, and suggested that judgment in SJTs was more situational when items measured job knowledge and skills and response options denoted context-specific rules of action.
Abstract: Whereas situational judgment tests (SJTs) have traditionally been conceptualized as low-fidelity simulations with an emphasis on contextualized situation descriptions and context-dependent knowledge, a recent perspective views SJTs as measures of more general domain (context-independent) knowledge. In the current research, we contrasted these 2 perspectives in 3 studies by removing the situation descriptions (i.e., item stems) from SJTs. Across studies, the traditional contextualized SJT perspective was not supported for between 43% and 71% of the items because it did not make a significant difference whether the situation description was included or not for these items. These results were replicated across construct domains, samples, and response instructions. However, there was initial evidence that judgment in SJTs was more situational when (a) items measured job knowledge and skills and (b) response options denoted context-specific rules of action. Verbal protocol analyses confirmed that high scorers on SJTs without situation descriptions relied upon general rules about the effectiveness of the responses. Implications for SJT theory, research, and design are discussed.

67 citations


Cites methods from "Measuring Procedural Knowledge More..."

  • ...Another example is the use of single-response SJTs (Crook et al., 2011; see also Motowidlo et al., 2009; Motowidlo, Martin, & Crook, 2013)....


References
Journal ArticleDOI

The Critical Incident Technique (Flanagan, 1954)

8,493 citations


"Measuring Procedural Knowledge More..." refers methods in this paper

  • ...In contrast to the high level of SME and investigator effort required to construct multiple-response SJT items, single-response SJT items can be developed much less laboriously by following well-known procedures for developing performance dimensions and rating scales based on critical incidents (Flanagan 1954)....


Journal ArticleDOI
TL;DR: The International Personality Item Pool (IPIP) is described as a prototype for public-domain personality measures and has been widely used for personality measurement.

2,822 citations


"Measuring Procedural Knowledge More..." refers methods in this paper


Journal ArticleDOI

Retranslation of Expectations: An Approach to the Construction of Unambiguous Anchors for Rating Scales (Smith & Kendall, 1963)

815 citations


"Measuring Procedural Knowledge More..." refers methods in this paper

  • ...Following practices described by Smith and Kendall (1963), the incidents and their preliminary dimensions are often analyzed further to develop behaviorally anchored rating scales....


Journal ArticleDOI
TL;DR: In this paper, a low-fidelity simulation for selecting entry-level managers in the telecommunications industry is presented.
Abstract: Development of a low-fidelity simulation test for selecting entry-level managers in the telecommunications industry. The simulation presents applicants with a variety of work situations and five response alternatives for each situation. Respondents indicate which response they would most and least likely make in each situation. Scores were correlated with supervisors' ratings of job performance.

492 citations


"Measuring Procedural Knowledge More..." refers background or methods in this paper

  • ...Regarding the development of multiple-response SJT items, Motowidlo et al. (1990) described a process involving three major steps....


  • ...An example of an SJT item with five response options appears below (from Motowidlo et al. 1990):...


Journal ArticleDOI
TL;DR: This article reviews the history of situational judgment tests and presents a meta-analysis of their criterion-related and construct validity, showing that such tests have useful, generalizable validity and typically evidence relationships with cognitive ability; implications for their continued use are discussed, particularly in terms of recent investigations into tacit knowledge.
Abstract: Although situational judgment tests have a long history in the psychological assessment literature and continue to be frequently used in employment contexts, there has been virtually no summarization of this literature. The purpose of this article is to review the history of such tests and present the results of a meta-analysis on criterion-related and construct validity. On the basis of 102 coefficients and 10,640 people, situational judgment tests showed useful levels of validity (rho = .34) that were generalizable. A review of 79 correlations between situational judgment tests and general cognitive ability involving 16,984 people indicated that situational judgment tests typically evidence relationships with cognitive ability (rho = .46). On the basis of the literature review and meta-analytic findings, implications for the continued use of situational judgment tests are discussed, particularly in terms of recent investigations into tacit knowledge.

400 citations


"Measuring Procedural Knowledge More..." refers background in this paper

  • ...Although situational judgment tests (SJTs) have been shown to be valid predictors of job performance (e.g., McDaniel et al. 2001), they are time-consuming and expensive to develop, largely because SJT items usually have multiple response options that are often difficult to create....
