Open Access Journal Article

Prevention Programs and Scientific Nonsense.

Dennis M. Gorman
01 Feb 2003, pp. 65
TL;DR
In this article, the author describes what he considers to be serious methodological flaws in the evaluations of these programs, especially with regard to the recruitment and retention of participants and/or the use of questionable analytical techniques such as one-tailed significance tests and post hoc subgroup analysis.
Abstract
In November 2001, a bizarre incident occurred at a conference sponsored by the Center for Substance Abuse Prevention (CSAP) in Washington, D.C. As reported by Stanley Kurtz on National Review Online (December 5 and 11, 2001) and Sally Satel on Tech Central Station (December 7, 2001), Christina Hoff Sommers of the American Enterprise Institute was invited to address an assembled audience of CSAP staff, grantees, and consultants concerning the agency's intention to fund "Boy Talk," a prevention program for young men designed to influence such behaviors as drug use and violence. Sommers is a critic of this type of gender-specific exercise in health education, and she suggested to the CSAP audience that the commitment of dollars to such programs should be informed by scientific evidence regarding their effectiveness. She observed that "Girl Power!"--the program on which "Boy Talk" was based--remained unsupported by empirical evidence despite having been the recipient of federal funds for the past six years. At this point, a CSAP official informed Sommers that she should end her presentation, apparently because the "Girl Power!" program was off limits. Whether Sommers had been informed of this taboo before she began her talk is not clear, but given the association between the two programs it seems reasonable that the experience with one be used to inform implementation of the other. Anyway, Sommers appears to have soldiered on with her presentation, for within minutes she was apparently instructed to "shut the f--- up, bitch" by a member of the audience, causing much merriment among the assembled group of professionals.

Over the summer, I had a similar, although not as overtly censorial or hostile, experience while presenting at the Tenth Annual Meeting of the Society for Prevention Research in Seattle, Washington. While nobody dropped the F-bomb on me or called me "bitch," the response to my attempt to examine the scientific base of some widely advocated prevention programs was an ad hominem attack coupled with defensive arguments justifying the violation of basic tenets of evaluation practice in prevention research.

Science and the learned society

My presentation at the Society for Prevention Research conference focused on a list of drug and violence prevention programs deemed "exemplary" by an expert panel convened by the U.S. Department of Education's Safe, Disciplined, and Drug-Free Schools program. According to the criteria this panel used to define exemplary, there must be at least one evaluation that has demonstrated an effect on some violent or drug-related behavior, and this evidence must come from a "methodologically sound evaluation." In my presentation, I critiqued the Education Department's criteria through an examination of three of the programs conferred exemplary status: Project ALERT, the Second Step curriculum, and the Adolescent Training and Learning to Avoid Steroids (ATLAS) program. I pointed out what I considered to be serious methodological flaws in the evaluations of these programs, especially with regard to the recruitment and retention of participants and/or the use of questionable analytical techniques such as one-tailed significance tests and post hoc subgroup analysis. In addition, I noted that while there were indeed isolated effects on behavioral outcomes to be found in the evaluations of these programs, the preponderance of evidence showed that they had little or no effect on drug use or violence.
For example, the evaluation of the violence prevention program Second Step produced only one statistically significant result out of the 20 outcomes that were assessed at final follow-up, while two evaluations of the ATLAS steroid prevention program failed to demonstrate any effects on steroid use. The annual conference of the Society for Prevention Research seemed an appropriate venue to discuss such issues. …
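To make the multiple-comparisons point concrete, here is a minimal Python sketch (an illustration added to this summary, not an analysis from the article): with 20 outcomes each tested at the conventional 0.05 significance level, a single "significant" result is roughly what chance alone would be expected to produce, which is why an isolated positive finding is weak evidence of program effectiveness.

# Illustrative sketch only; alpha and the outcome count mirror the conventional
# 0.05 threshold and the 20 outcomes mentioned above, not data from the article.
alpha = 0.05        # conventional per-test significance level
n_outcomes = 20     # number of outcomes assessed at final follow-up

# If no true effects exist and the tests were independent:
expected_false_positives = alpha * n_outcomes
p_at_least_one = 1 - (1 - alpha) ** n_outcomes

print(f"Expected spurious 'significant' results: {expected_false_positives:.1f}")   # 1.0
print(f"Probability of at least one by chance alone: {p_at_least_one:.2f}")         # ~0.64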


Citations
Journal Article

Drug and violence prevention: Rediscovering the critical rational dimension of evaluation research

TL;DR: In this paper, the authors argue that while certain designs are clearly better than others in dealing with threats to internal validity and allow for better generalization of results beyond the study population, utilization of these designs in and of itself is not sufficient to designate an evaluation study as "scientific".
Book Chapter

Conflicts of Interest in Public Policy Research

TL;DR: In this article, the authors discuss the difficulty of sustaining an inquisitorial system of policy research and analysis when it is embedded in a broader adversarial political setting, and encourage norms of "heterogeneous adversarialism" in which investigators strive for within-study hypothesis competition and greater clarity about roles, facts, and values.
Journal Article

Replacing ineffective early alcohol/drug education in the United States with age‐appropriate adolescent programmes and assistance to problematic users

TL;DR: A new approach to drug education for adolescent students seems warranted as a positive alternative to personally intrusive surveillance; an interactive approach at the secondary school level that incorporates an age-appropriate educational process is proposed.
Journal Article

Science, pseudoscience and the need for practical knowledge.

TL;DR: Graham contends that the practical needs of communities trying to deal with problems related to drinking in licensed premises would be better served by a change in the way in which researchers design evaluations of community prevention programs.
Journal Article

Developing methods to compare low-education community-based and university-based survey teams.

TL;DR: It is suggested that it is possible to study the relative quality of community and university-based teams in terms of data collection and that the two types of teams may be roughly comparable.