
Showing papers by "Janet B. W. Williams published in 2004"


Journal ArticleDOI
TL;DR: Empirical methods should be applied to the assessment of changes that purport to improve clinical utility in future revisions of DSM to clarify whether the advantages of changing the diagnostic criteria outweigh potential negative consequences.
Abstract: OBJECTIVE: Changes in DSM-IV were guided by empirical data that mostly focused on improving diagnostic validity and reliability. Although many changes were made explicitly to improve clinical utility, no formal effort was made to empirically determine actual improvements in clinical utility. The authors propose that future revisions of DSM empirically demonstrate improvement in clinical utility to clarify whether the advantages of changing the diagnostic criteria outweigh potential negative consequences. METHOD: The authors provide a formal definition of clinical utility and then suggest that the merits of a proposed change to DSM be evaluated by considering 1) its impact on the use of the diagnostic system, 2) whether it enhances clinical decision making, and 3) whether it improves clinical outcome. RESULTS: Evaluating a change based on its impact on use considers both user acceptability and accuracy in application of the diagnostic criteria. User acceptability can be measured by surveying users’ reactions...

305 citations


Journal ArticleDOI
TL;DR: Despite the lack of empirical data supporting acquisition of a specific set of rater skills, it is believed that the following general skills are essential to conduct a competent clinical interview using clinician-administered symptom rating scales.
Abstract: The growing rate of failed clinical trials in neuroscience has led to increased attention to methodologic factors that may contribute to these failures. One issue that has been largely overlooked is rater training and rater competency. Given that scores on clinician-administered symptom rating scales form the foundation on which the success of a study is built, it is surprising that so little attention has been paid to this issue. Several questions related to rater training and rater competency warrant examination: who is qualified to rate, what components training should include, how and when training should be provided, and, perhaps most important, whether rater training is effective.

1. Who is qualified to administer outcome measures in clinical trials? There is great variety in the backgrounds and experience of persons administering rating scales in clinical trials conducted in the United States, ranging from psychiatrists to study coordinators with bachelor’s degrees (often in fields unrelated to psychiatry) and little, if any, clinical experience. Academic credentials alone fail to ensure competence in this area, because few formal academic programs include training on the clinician-administered rating scales typically used in clinical trials. Most training that does occur takes place at the investigative site. Despite the lack of empirical data supporting acquisition of a specific set of rater skills, we believe that the following general skills are essential to conducting a competent clinical interview using clinician-administered symptom rating scales:

Conceptual understanding. Raters should have didactic training in psychopathology (particularly in the disorder of interest) so that they have a good conceptual understanding of the constructs being evaluated. An example of such training would be a course in psychopathology that covers current theories of depression, including diagnostic constructs and criteria.

Clinical experience. Raters should have enough clinical experience with patients who have the disorder being evaluated, at all levels of severity, to recognize and judge the severity of each symptom rated on the scale (e.g., “if he does not know what retardation is, he will be unable to recognize it when it is present and unable to rate it”). Unfortunately, some raters get little training before seeing patients in clinical trials and often learn on clinical trial patients, with clinical trial data. Often, it is their first exposure to patients with the disorder being studied.

Guest Editorial

48 citations